QUERY RUN: 22 Aug 2025 at 01:42
HITS: 4166

Bibliography on: Cloud Computing


Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.

ESP: PubMed Auto Bibliography. Created: 22 Aug 2025 at 01:42

Cloud Computing

Wikipedia: Cloud Computing. Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Cloud computing relies on sharing of resources to achieve coherence and economies of scale. Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing burst computing capability: high computing power at certain periods of peak demand. Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models. The possibility of unexpected operating expenses is especially problematic in a grant-funded research institution, where funds may not be readily available to cover significant cost overruns.

Created with PubMed® Query: ( cloud[TIAB] AND (computing[TIAB] OR "amazon web services"[TIAB] OR google[TIAB] OR "microsoft azure"[TIAB]) ) NOT pmcbook NOT ispreviousversion

Citations: The Papers (from PubMed®)


RevDate: 2025-08-21
CmpDate: 2025-08-21

Cui D, Peng Z, Li K, et al (2025)

A novel cloud task scheduling framework using hierarchical deep reinforcement learning for cloud computing.

PloS one, 20(8):e0329669 pii:PONE-D-24-45416.

With the increasing popularity of cloud computing services, their large and dynamic load characteristics have rendered task scheduling an NP-complete problem. To address large-scale task scheduling in cloud computing environments, this paper proposes a novel cloud task scheduling framework based on hierarchical deep reinforcement learning (DRL). The framework defines a set of virtual machines (VMs) as a VM cluster and employs hierarchical scheduling to allocate tasks first to a cluster and then to individual VMs. The scheduler, designed using DRL, adapts to dynamic changes in the cloud environment by continuously learning and updating network parameters. Experiments demonstrate that it effectively balances cost and performance: in low-load situations, costs are reduced by using low-cost nodes within the Service Level Agreement (SLA) range; in high-load situations, resource utilization is improved through load balancing. Compared with classical heuristic algorithms, the framework better optimizes load balance, cost, and overdue time, achieving a 10% overall improvement. The method still has shortcomings. The hierarchical DRL framework adds complexity and computational overhead: implementing and maintaining a DRL-based scheduler requires significant computational resources and machine learning expertise. The continuous learning and updating of network parameters may introduce latency that affects real-time scheduling efficiency, and the framework's performance depends heavily on the quality and quantity of training data, which can be difficult to obtain and maintain in a dynamic cloud environment.
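
The abstract does not include an implementation; below is a minimal Python sketch of the two-level (cluster-first, then VM) dispatch idea it describes. The greedy cost/finish-time score, class names, and example numbers are hypothetical simplifications; the paper learns these decisions with deep reinforcement learning rather than a fixed rule.

```python
# Illustrative two-level task dispatch: group VMs into clusters, then pick the
# (cluster, VM) pair that minimizes a weighted cost/finish-time score.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VM:
    name: str
    mips: float              # processing capacity
    cost_per_s: float        # price of one second of compute
    ready_at: float = 0.0    # time the VM becomes free

@dataclass
class Cluster:
    name: str
    vms: List[VM] = field(default_factory=list)

def schedule(task_len: float, clusters: List[Cluster], cost_weight: float = 0.5):
    """Pick (cluster, VM) minimizing a weighted sum of finish time and cost."""
    best = None
    for cluster in clusters:
        for vm in cluster.vms:
            runtime = task_len / vm.mips
            finish = vm.ready_at + runtime
            score = (1 - cost_weight) * finish + cost_weight * runtime * vm.cost_per_s
            if best is None or score < best[0]:
                best = (score, cluster, vm, finish)
    _, cluster, vm, finish = best
    vm.ready_at = finish             # reserve the VM until the task finishes
    return cluster.name, vm.name, finish

if __name__ == "__main__":
    clusters = [
        Cluster("low-cost", [VM("vm-a", mips=500, cost_per_s=0.01)]),
        Cluster("high-perf", [VM("vm-b", mips=2000, cost_per_s=0.05)]),
    ]
    for length in (1e4, 5e4, 2e5):   # task lengths in instructions
        print(schedule(length, clusters))
```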

RevDate: 2025-08-20

Manhary FN, Mohamed MH, M Farouk (2025)

A scalable machine learning strategy for resource allocation in database.

Scientific reports, 15(1):30567.

Modern cloud computing systems require intelligent resource allocation strategies that balance quality-of-service (QoS), operational costs, and energy sustainability. Existing deep Q-learning (DQN) methods suffer from sample inefficiency, centralization bottlenecks, and reactive decision-making during workload spikes. Transformer-based forecasting models such as Temporal Fusion Transformer (TFT) offer improved accuracy but introduce computational overhead, limiting real-time deployment. We propose LSTM-MARL-Ape-X, a novel framework integrating bidirectional Long Short-Term Memory (BiLSTM) for workload forecasting with Multi-Agent Reinforcement Learning (MARL) in a distributed Ape-X architecture. This approach enables proactive, decentralized, and scalable resource management through three innovations: high-accuracy forecasting using BiLSTM with feature-wise attention, variance-regularized credit assignment for stable multi-agent coordination, and faster convergence via adaptive prioritized replay. Experimental validation on real-world traces demonstrates 94.6% SLA compliance, 22% reduction in energy consumption, and linear scalability to over 5,000 nodes with sub-100 ms decision latency. The framework converges 3.2× faster than uniform sampling baselines and outperforms transformer-based models in both accuracy and inference speed. Unlike decoupled prediction-action frameworks, our method provides end-to-end optimization, enabling robust and sustainable cloud orchestration at scale.
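
As a rough illustration of the BiLSTM forecasting component described above, here is a minimal tf.keras sketch; the window length, feature set, layer sizes, and toy data are assumptions, and the paper's feature-wise attention and multi-agent RL components are omitted.

```python
# Minimal bidirectional-LSTM workload forecaster (illustrative only).
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 24, 3   # e.g. 24 past samples of CPU, memory, request rate

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted load for the next interval
])
model.compile(optimizer="adam", loss="mse")

# Toy training data standing in for real workload traces.
x = np.random.rand(256, WINDOW, FEATURES).astype("float32")
y = x[:, -1, 0:1] + 0.1 * np.random.rand(256, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
print("next-interval load forecast:", float(model.predict(x[:1], verbose=0)[0, 0]))
```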

RevDate: 2025-08-19

Park SY, Takayama C, Ryu J, et al (2025)

Design and evaluation of next-generation HIV genotyping for detection of resistance mutations to 28 antiretroviral drugs across five major classes including lenacapavir.

Clinical infectious diseases : an official publication of the Infectious Diseases Society of America pii:8237671 [Epub ahead of print].

BACKGROUND: The emergence and spread of HIV drug-resistant strains present a major barrier to effective lifelong Antiretroviral Therapy (ART). The anticipated rise in long-acting subcutaneous lenacapavir (LEN) use, along with the increased risk of transmitted resistance and Pre-Exposure Prophylaxis (PrEP)-associated resistance, underscores the urgent need for advanced genotyping methods to enhance clinical care and prevention strategies.

METHODS: We developed the Portable HIV Genotyping (PHG) platform which combines cost-effective next-generation sequencing with cloud computing to screen for resistance to 28 antiretroviral drugs across five major classes, including LEN. We analyzed three study cohorts and compared our drug resistance findings against standard care testing results and high-fidelity sequencing data obtained through unique molecular identifier (UMI) labeling.

RESULTS: PHG identified two major LEN-resistance mutations in one participant, confirmed by an additional independent sequencing run. Across three study cohorts, PHG consistently detected the same drug resistance mutations as standard care genotyping and high-fidelity UMI-labeling in most tested specimens. PHG's 10% limit of detection minimized false positives and enabled identification of minority variants less than 20% frequency, pointing to underdiagnosis of drug resistance in clinical care. Furthermore, PHG identified linked cross-class resistance mutations, confirmed by UMI-labeling, including linked cross-resistance in a participant who reported use of long-acting cabotegravir (CAB) and rilpivirine (RPV). We also observed multi-year persistence of linked cross-class resistance mutations.

CONCLUSIONS: PHG demonstrates significant improvements over standard care HIV genotyping, offering deeper insights into LEN-resistance, minority variants, and cross-class resistance using a low-cost high-throughput portable sequencing technology and publicly available cloud computing.

RevDate: 2025-08-17

Wu J, Bian Z, Gao H, et al (2025)

A Blockchain-Based Secure Data Transaction and Privacy Preservation Scheme in IoT System.

Sensors (Basel, Switzerland), 25(15):.

With the explosive growth of Internet of Things (IoT) devices, massive amounts of heterogeneous data are continuously generated. However, IoT data transactions and sharing face multiple challenges such as limited device resources, untrustworthy network environment, highly sensitive user privacy, and serious data silos. How to achieve fine-grained access control and privacy protection for massive devices while ensuring secure and reliable data circulation has become a key issue that needs to be urgently addressed in the current IoT field. To address the above challenges, this paper proposes a blockchain-based data transaction and privacy protection framework. First, the framework builds a multi-layer security architecture that integrates blockchain and IPFS and adapts to the "end-edge-cloud" collaborative characteristics of IoT. Secondly, a data sharing mechanism that takes into account both access control and interest balance is designed. On the one hand, the mechanism uses attribute-based encryption (ABE) technology to achieve dynamic and fine-grained access control for massive heterogeneous IoT devices; on the other hand, it introduces a game theory-driven dynamic pricing model to effectively balance the interests of both data supply and demand. Finally, in response to the needs of confidential analysis of IoT data, a secure computing scheme based on CKKS fully homomorphic encryption is proposed, which supports efficient statistical analysis of encrypted sensor data without leaking privacy. Security analysis and experimental results show that this scheme is secure under standard cryptographic assumptions and can effectively resist common attacks in the IoT environment. Prototype system testing verifies the functional completeness and performance feasibility of the scheme, providing a complete and effective technical solution to address the challenges of data integrity, verifiable transactions, and fine-grained access control, while mitigating the reliance on a trusted central authority in IoT data sharing.
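
The confidential-analytics step above is based on CKKS fully homomorphic encryption. The sketch below uses the TenSEAL library (an assumption; the paper does not name its CKKS implementation) to compute the mean of encrypted sensor readings without decrypting individual values. Parameters and data are illustrative.

```python
# Compute a statistic over encrypted sensor data with CKKS (TenSEAL).
import tenseal as ts

context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.generate_galois_keys()      # needed for vector sums/rotations
context.global_scale = 2 ** 40

readings = [21.4, 21.9, 22.3, 21.7]           # e.g. temperature samples
enc = ts.ckks_vector(context, readings)        # encrypted on the device side

enc_mean = enc.sum() * (1.0 / len(readings))   # evaluated on untrusted storage
print("decrypted mean:", enc_mean.decrypt()[0])  # only the key holder can decrypt
```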

RevDate: 2025-08-18

Chapman OS, Sridhar S, Chow EY, et al (2025)

Extrachromosomal DNA associates with poor survival across a broad spectrum of childhood solid tumors.

medRxiv : the preprint server for health sciences.

Circular extrachromosomal DNA (ecDNA) is a common form of oncogene amplification in aggressive cancers. The frequency and diversity of ecDNA has been catalogued in adult and some childhood cancers; however, its role in most pediatric cancers is not well-understood. To address this gap, we accessed large pediatric cancer genomics data repositories and identified ecDNA from whole genome sequencing data using cloud computing. This retrospective cohort comprises 3,631 solid tumor biopsies from 2,968 patients covering all major childhood solid tumor types. Aggressive tumor types had particularly high incidences of ecDNA. Pediatric patients whose tumors harbored extrachromosomal DNA had significantly poorer five-year overall survival than children whose tumors contained only chromosomal amplifications. We catalogue known and potentially novel oncogenes recurrently amplified on ecDNA and show that ecDNA often evolves during disease progression. These results highlight patient populations that could potentially benefit from future ecDNA-directed therapies. To facilitate discovery, we developed an interactive catalogue of ecDNA in childhood cancer at https://ccdi-ecdna.org/.

RevDate: 2023-11-10

Vahidy F, Jones SL, Tano ME, et al (2021)

Rapid Response to Drive COVID-19 Research in a Learning Health Care System: Rationale and Design of the Houston Methodist COVID-19 Surveillance and Outcomes Registry (CURATOR).

JMIR medical informatics, 9(2):e26773.

BACKGROUND: The COVID-19 pandemic has exacerbated the challenges of meaningful health care digitization. The need for rapid yet validated decision-making requires robust data infrastructure. Organizations with a focus on learning health care (LHC) systems tend to adapt better to rapidly evolving data needs. Few studies have demonstrated a successful implementation of data digitization principles in an LHC context across health care systems during the COVID-19 pandemic.

OBJECTIVE: We share our experience and provide a framework for assembling and organizing multidisciplinary resources, structuring and regulating research needs, and developing a single source of truth (SSoT) for COVID-19 research by applying fundamental principles of health care digitization, in the context of LHC systems across a complex health care organization.

METHODS: Houston Methodist (HM) comprises eight tertiary care hospitals and an expansive primary care network across Greater Houston, Texas. During the early phase of the pandemic, institutional leadership envisioned the need to streamline COVID-19 research and established the retrospective research task force (RRTF). We describe an account of the structure, functioning, and productivity of the RRTF. We further elucidate the technical and structural details of a comprehensive data repository-the HM COVID-19 Surveillance and Outcomes Registry (CURATOR). We particularly highlight how CURATOR conforms to standard health care digitization principles in the LHC context.

RESULTS: The HM COVID-19 RRTF comprises expertise in epidemiology, health systems, clinical domains, data sciences, information technology, and research regulation. The RRTF initially convened in March 2020 to prioritize and streamline COVID-19 observational research; to date, it has reviewed over 60 protocols and made recommendations to the institutional review board (IRB). The RRTF also established the charter for CURATOR, which in itself was IRB-approved in April 2020. CURATOR is a relational structured query language database that is directly populated with data from electronic health records, via largely automated extract, transform, and load procedures. The CURATOR design enables longitudinal tracking of COVID-19 cases and controls before and after COVID-19 testing. CURATOR has been set up following the SSoT principle and is harmonized across other COVID-19 data sources. CURATOR eliminates data silos by leveraging unique and disparate big data sources for COVID-19 research and provides a platform to capitalize on institutional investment in cloud computing. It currently hosts deeply phenotyped sociodemographic, clinical, and outcomes data of approximately 200,000 individuals tested for COVID-19. It supports more than 30 IRB-approved protocols across several clinical domains and has generated numerous publications from its core and associated data sources.

CONCLUSIONS: A data-driven decision-making strategy is paramount to the success of health care organizations. Investment in cross-disciplinary expertise, health care technology, and leadership commitment are key ingredients to foster an LHC system. Such systems can mitigate the effects of ongoing and future health care catastrophes by providing timely and validated decision support.

RevDate: 2023-11-11
CmpDate: 2016-10-17

Dinov ID (2016)

Methodological challenges and analytic opportunities for modeling and interpreting Big Healthcare Data.

GigaScience, 5:12.

Managing, processing and understanding big healthcare data is challenging, costly and demanding. Without a robust fundamental theory for representation, analysis and inference, a roadmap for uniform handling and analyzing of such complex data remains elusive. In this article, we outline various big data challenges, opportunities, modeling methods and software techniques for blending complex healthcare data, advanced analytic tools, and distributed scientific computing. Using imaging, genetic and healthcare data we provide examples of processing heterogeneous datasets using distributed cloud services, automated and semi-automated classification techniques, and open-science protocols. Despite substantial advances, new innovative technologies need to be developed that enhance, scale and optimize the management and processing of large, complex and heterogeneous data. Stakeholder investments in data acquisition, research and development, computational infrastructure and education will be critical to realize the huge potential of big data, to reap the expected information benefits and to build lasting knowledge assets. Multi-faceted proprietary, open-source, and community developments will be essential to enable broad, reliable, sustainable and efficient data-driven discovery and analytics. Big data will affect every sector of the economy and their hallmark will be 'team science'.

RevDate: 2025-08-18

Isik MS, Parente L, Consoli D, et al (2025)

Light use efficiency (LUE) based bimonthly gross primary productivity (GPP) for global grasslands at 30 m spatial resolution (2000-2022).

PeerJ, 13:e19774 pii:19774.

The article describes the production of a high spatial resolution (30 m) bimonthly light use efficiency (LUE) based gross primary productivity (GPP) data set representing grasslands for the period 2000 to 2022. The data set is based on a reconstructed, complete, and consistent global bimonthly Landsat archive (400 TB of data), combined with 1 km MOD11A1 temperature data and 1° CERES Photosynthetically Active Radiation (PAR). First, the LUE model was implemented by taking the biome-specific productivity factor (maximum LUE parameter) as a global constant, producing global bimonthly (uncalibrated) productivity data for the complete land mask. Second, the 30 m bimonthly GPP maps were derived for annual global grassland predictions, calibrating the values with a maximum LUE factor of 0.86 gCm[-2]d[-1]MJ[-1]. Validation of the produced GPP estimates against 527 eddy covariance flux towers shows an R-square between 0.48-0.71 and a root mean square error (RMSE) below ~2.3 gCm[-2]d[-1] for all land cover classes. Using a total of 92 flux towers located in grasslands, validation of the GPP product calibrated for the grassland biome revealed an R-square between 0.51-0.70 and an RMSE smaller than ~2 gCm[-2]d[-1]. The final time-series of maps (uncalibrated and grassland GPP) are available as bimonthly (daily estimates in units of gCm[-2]d[-1]) and annual (daily average accumulated over 365 days in units of gCm[-2]yr[-1]) Cloud-Optimized GeoTIFFs (~23 TB in size) as open data (CC-BY license). Recommended uses of the data include trend analysis (e.g., to determine where the largest losses in GPP occur, which could indicate potential land degradation), crop yield mapping, and modeling GHG fluxes at finer spatial resolution. Produced maps are available via SpatioTemporal Asset Catalog (http://stac.openlandmap.org) and Google Earth Engine.
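
For readers unfamiliar with LUE models, here is a minimal worked example of the general form GPP = eps_max x fAPAR x PAR x environmental scalars, using the grassland maximum-LUE factor of 0.86 gCm[-2]d[-1]MJ[-1] quoted above; the single temperature scalar is a simplified stand-in for the model's actual reduction terms, and the example inputs are illustrative.

```python
# Generic light-use-efficiency GPP calculation (simplified sketch).
def gpp_lue(par_mj_m2_d, fapar, t_scalar, eps_max=0.86):
    """Daily GPP (gC m-2 d-1) from PAR (MJ m-2 d-1), fAPAR (0-1), and a 0-1 temperature scalar."""
    return eps_max * fapar * par_mj_m2_d * t_scalar

# Example: a grassland pixel on a mild day.
print(gpp_lue(par_mj_m2_d=8.0, fapar=0.6, t_scalar=0.9))   # ~3.7 gC m-2 d-1
```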

RevDate: 2025-08-17

Periasamy JK, Prabhakar S, Vanathi A, et al (2025)

Enhancing cloud security and deduplication efficiency with SALIGP and cryptographic authentication.

Scientific reports, 15(1):30112.

Cloud computing enables data storage and application deployment over the internet, offering benefits such as mobility, resource pooling, and scalability. However, it also presents major challenges, particularly in managing shared resources, ensuring data security, and controlling distributed applications in the absence of centralized oversight. One key issue is data duplication, which leads to inefficient storage, increased costs, and potential privacy and security risks. To address these challenges, this study proposes a post-quantum mechanism that enhances both cloud security and deduplication efficiency. The proposed SALIGP method leverages Genetic Programming and a Geometric Approach, integrating Bloom Filters for efficient duplication detection. The Cryptographic Deduplication Authentication Scheme (CDAS) is introduced, which utilizes blockchain technology to securely store and retrieve files, while ensuring that encrypted access is limited to authorized users. This dual-layered approach effectively resolves the issue of redundant data in dynamic, distributed cloud environments. Experimental results demonstrate that the proposed method significantly reduces computation and communication times at various network nodes, particularly in key generation and group operations. Encrypting user data prior to outsourcing ensures enhanced privacy protection during the deduplication process. Overall, the proposed system leads to substantial improvements in cloud data security, reliability, and storage efficiency, offering a scalable and secure framework for modern cloud computing environments.
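
The deduplication step above relies on Bloom filters for fast duplicate detection. A minimal, library-free Python sketch of that idea (bit-array size and hash count are illustrative values, not the paper's parameters):

```python
# Bloom filter for chunk-level duplicate detection: no false negatives,
# a small tunable false-positive rate.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, n_hashes=4):
        self.size = size_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, data: bytes):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + data).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, data: bytes):
        for pos in self._positions(data):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, data: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(data))

bf = BloomFilter()
chunk = b"file-chunk-contents"
if not bf.probably_contains(chunk):   # a "no" is definitely a new chunk
    bf.add(chunk)                     # store it; otherwise skip the redundant upload
print(bf.probably_contains(chunk))    # True
```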

RevDate: 2025-08-18

Wang J, Li K, Han T, et al (2025)

Long-term Land Cover Dataset of the Mongolian Plateau Based on Multi-source Data and Rich Sample Annotations.

Scientific data, 12(1):1434.

The Mongolian Plateau (MP), with its unique geographical landscape and nomadic cultural features, is vital to regional ecological security and sustainable development in North Asia. Existing global land cover products often lack the classification specificity and temporal continuity required for MP-specific studies, particularly for grassland and bare area subtypes. To address this gap, a new land cover classification scheme was designed for the MP, comprising 14 categories: forests, shrubs, meadows, real steppes, dry steppes, desert steppes, wetlands, water, croplands, built-up land, barren land, desert, sand, and ice. Using machine learning and cloud computing, a novel dataset spanning the period 1990-2020 was produced. A Random Forest algorithm was employed to integrate training samples with multi-source features for land cover classification, and a two-step Random Forest classification strategy was used to improve detailed land cover results in transition regions. This process involved accurately annotating 64,345 sample points within a gridded framework. The resulting dataset achieved an overall accuracy of 83.6%. This land cover product and its approach have potential for application in vast arid and semi-arid areas.
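
A minimal scikit-learn sketch of the two-step Random Forest idea (coarse classification, then refinement of pixels in an ambiguous transition class); the feature columns and class labels below are illustrative, not the paper's scheme.

```python
# Two-step Random Forest classification sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 6))                 # e.g. spectral bands / indices per pixel
coarse_y = rng.integers(0, 3, 500)       # 0=steppe, 1=forest, 2=transition (toy labels)

step1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, coarse_y)
pred = step1.predict(X)

# Refine only the pixels predicted as the ambiguous "transition" class.
mask = pred == 2
fine_y = rng.integers(0, 2, mask.sum())  # 0=dry steppe, 1=desert steppe (toy labels)
step2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[mask], fine_y)
print("refined labels for transition pixels:", step2.predict(X[mask])[:10])
```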

RevDate: 2025-08-14

Ahmad T, Schuchart J, Al Ars Z, et al (2025)

GenMPI: Cluster Scalable Variant Calling for Short/Long Reads Sequencing Data.

IEEE transactions on computational biology and bioinformatics, PP: [Epub ahead of print].

Rapid technological advancements in sequencing technologies allow the production of cost-effective, high-volume sequencing data. Processing this data for real-time clinical diagnosis is potentially time-consuming if done on a single computing node. This work presents a complete variant calling workflow, implemented using the Message Passing Interface (MPI) to leverage the benefits of high bandwidth interconnects. This solution (GenMPI) is portable and flexible, meaning it can be deployed to any private or public cluster/cloud infrastructure. Any alignment or variant calling application can be used with minimal adaptation. To achieve high performance, compressed input data can be streamed in parallel to alignment applications while uncompressed data can use internal file seek functionality to eliminate the bottleneck of streaming input data from a single node. Alignment output can be directly stored in multiple chromosome-specific SAM files or a single SAM file. After alignment, a distributed queue using MPI RMA (Remote Memory Access) atomic operations is created for sorting, indexing, marking of duplicates (if necessary) and variant calling applications. We ensure the accuracy of variants as compared to the original single node methods. We also show that for 300x coverage data, alignment scales almost linearly up to 64 nodes (8192 CPU cores). Overall, this work outperforms existing big data based workflows by a factor of two and is almost 20% faster than other MPI-based implementations for alignment without any extra memory overheads. Sorting, indexing, duplicate removal, and variant calling also scale up to an 8-node cluster. For paired-end short-reads (Illumina) data, we integrated the BWA-MEM aligner and three variant callers (GATK HaplotypeCaller, DeepVariant and Octopus), while for long-reads data, we integrated the Minimap2 aligner and three different variant callers (DeepVariant, DeepVariant with WhatsHap for phasing (PacBio) and Clair3 (ONT)). All codes and scripts are available at: https://github.com/abs-tudelft/gen-mpi.
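
A minimal mpi4py sketch of the scatter/gather pattern such an MPI workflow uses (distribute chromosome-level work from rank 0, process locally, gather results); the work items and per-chromosome processing are placeholders, not GenMPI's actual code.

```python
# Distribute per-chromosome work across MPI ranks and collect the results.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    chromosomes = [f"chr{i}" for i in range(1, 23)] + ["chrX", "chrY"]
    # One work bucket per rank (a real run balances by chromosome size).
    buckets = [chromosomes[i::size] for i in range(size)]
else:
    buckets = None

my_chroms = comm.scatter(buckets, root=0)
local_results = [f"variants for {c} called on rank {rank}" for c in my_chroms]

all_results = comm.gather(local_results, root=0)
if rank == 0:
    print(sum(all_results, []))
```

Run with, for example, mpiexec -n 4 python sketch.py.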

RevDate: 2025-08-17

Liu S, Shan N, Bao X, et al (2025)

Distributed Collaborative Data Processing Framework for Unmanned Platforms Based on Federated Edge Intelligence.

Sensors (Basel, Switzerland), 25(15):.

Unmanned platforms such as unmanned aerial vehicles, unmanned ground vehicles, and autonomous underwater vehicles often face challenges of data, device, and model heterogeneity when performing collaborative data processing tasks. Existing research does not simultaneously address issues from these three aspects. To address this issue, this study designs an unmanned platform cluster architecture inspired by the cloud-edge-end model. This architecture integrates federated learning for privacy protection, leverages the advantages of distributed model training, and utilizes edge computing's near-source data processing capabilities. Additionally, this paper proposes a federated edge intelligence method (DSIA-FEI), which comprises two key components. Based on traditional federated learning, a data sharing mechanism is introduced, in which data is extracted from edge-side platforms and placed into a data sharing platform to form a public dataset. At the beginning of model training, random sampling is conducted from the public dataset and distributed to each unmanned platform, so as to mitigate the impact of data distribution heterogeneity and class imbalance during collaborative data processing in unmanned platforms. Moreover, an intelligent model aggregation strategy based on similarity measurement and loss gradient is developed. This strategy maps heterogeneous model parameters to a unified space via hierarchical parameter alignment, and evaluates the similarity between local and global models of edge devices in real-time, along with the loss gradient, to select the optimal model for global aggregation, reducing the influence of device and model heterogeneity on cooperative learning of unmanned platform swarms. This study carried out extensive validation on multiple datasets, and the experimental results showed that the accuracy of the DSIA-FEI proposed in this paper reaches 0.91, 0.91, 0.88, and 0.87 on the FEMNIST, FEAIR, EuroSAT, and RSSCN7 datasets, respectively, which is more than 10% higher than the baseline method. In addition, the number of communication rounds is reduced by more than 40%, which is better than the existing mainstream methods, and the effectiveness of the proposed method is verified.
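
A minimal NumPy sketch of a similarity-weighted aggregation step in the spirit of the strategy described above: each client update is weighted by its cosine similarity to the current global model. The loss-gradient term and hierarchical parameter alignment from the paper are omitted, and all names and values are illustrative.

```python
# Similarity-weighted federated aggregation (simplified sketch).
import numpy as np

def aggregate(global_w, client_ws, eps=1e-8):
    g = np.concatenate([w.ravel() for w in global_w])
    weights = []
    for cw in client_ws:
        c = np.concatenate([w.ravel() for w in cw])
        sim = float(np.dot(g, c) / (np.linalg.norm(g) * np.linalg.norm(c) + eps))
        weights.append(max(sim, 0.0))        # ignore clients pointing "away" from global
    weights = np.array(weights) / (sum(weights) + eps)
    return [sum(w * cw[i] for w, cw in zip(weights, client_ws)) for i in range(len(global_w))]

global_w = [np.ones((4, 4)), np.zeros(4)]
clients = [[np.ones((4, 4)) * 1.1, np.zeros(4)],
           [np.ones((4, 4)) * 0.9, np.ones(4) * 0.1]]
new_global = aggregate(global_w, clients)
print(new_global[0][0, 0])   # close to 1.0: a weighted blend of the two clients
```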

RevDate: 2025-08-17

Cui M, Y Wang (2025)

An Effective QoS-Aware Hybrid Optimization Approach for Workflow Scheduling in Cloud Computing.

Sensors (Basel, Switzerland), 25(15):.

Workflow scheduling in cloud computing is attracting increasing attention. Cloud computing can assign tasks to available virtual machine resources in cloud data centers according to scheduling strategies, providing a powerful computing platform for the execution of workflow tasks. However, developing effective workflow scheduling algorithms to find optimal or near-optimal task-to-VM allocation solutions that meet users' specific QoS requirements still remains an open area of research. In this paper, we propose a hybrid QoS-aware workflow scheduling algorithm named HLWOA to address the problem of simultaneously minimizing the completion time and execution cost of workflow scheduling in cloud computing. First, the workflow scheduling problem in cloud computing is modeled as a multi-objective optimization problem. Then, based on the heterogeneous earliest finish time (HEFT) heuristic optimization algorithm, tasks are reverse topologically sorted and assigned to virtual machines with the earliest finish time to construct an initial workflow task scheduling sequence. Furthermore, an improved Whale Optimization Algorithm (WOA) based on Lévy flight is proposed. The output solution of HEFT is used as one of the initial population solutions in WOA to accelerate the convergence speed of the algorithm. Subsequently, a Lévy flight search strategy is introduced in the iterative optimization phase to avoid the algorithm falling into local optimal solutions. The proposed HLWOA is evaluated on the WorkflowSim platform using real-world scientific workflows (Cybershake and Montage) with different task scales (100 and 1000). Experimental results demonstrate that HLWOA outperforms HEFT, HEPGA, and standard WOA in both makespan and cost, with normalized fitness values consistently ranking first.
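
A minimal sketch of the Lévy-flight perturbation added to WOA, drawing steps with Mantegna's algorithm; only the step generator is shown, while the surrounding WOA update and HEFT-seeded initialization are omitted, and the example encoding is illustrative.

```python
# Lévy-flight step generator (Mantegna's algorithm) used to perturb candidates.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma_u, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

# Perturb a candidate task-to-VM assignment encoded as continuous positions.
position = np.array([2.3, 0.7, 1.5, 3.1])   # one value per task, mapped to a VM index
new_position = position + 0.01 * levy_step(position.size) * (position - position.mean())
print(new_position)
```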

RevDate: 2025-08-17

Mtowe DP, Long L, DM Kim (2025)

Low-Latency Edge-Enabled Digital Twin System for Multi-Robot Collision Avoidance and Remote Control.

Sensors (Basel, Switzerland), 25(15):.

This paper proposes a low-latency and scalable architecture for Edge-Enabled Digital Twin networked control systems (E-DTNCS) aimed at multi-robot collision avoidance and remote control in dynamic and latency-sensitive environments. Traditional approaches, which rely on centralized cloud processing or direct sensor-to-controller communication, are inherently limited by excessive network latency, bandwidth bottlenecks, and a lack of predictive decision-making, thus constraining their effectiveness in real-time multi-agent systems. To overcome these limitations, we propose a novel framework that seamlessly integrates edge computing with digital twin (DT) technology. By performing localized preprocessing at the edge, the system extracts semantically rich features from raw sensor data streams, reducing the transmission overhead of the original data. This shift from raw data to feature-based communication significantly alleviates network congestion and enhances system responsiveness. The DT layer leverages these extracted features to maintain high-fidelity synchronization with physical robots and to execute predictive models for proactive collision avoidance. To empirically validate the framework, a real-world testbed was developed, and extensive experiments were conducted with multiple mobile robots. The results revealed a substantial reduction in collision rates when DT was deployed, and further improvements were observed with E-DTNCS integration due to significantly reduced latency. These findings confirm the system's enhanced responsiveness and its effectiveness in handling real-time control tasks. The proposed framework demonstrates the potential of combining edge intelligence with DT-driven control in advancing the reliability, scalability, and real-time performance of multi-robot systems for industrial automation and mission-critical cyber-physical applications.

RevDate: 2025-08-17

Stojanović R, Đurković J, Vukmirović M, et al (2025)

Medical Data over Sound-CardiaWhisper Concept.

Sensors (Basel, Switzerland), 25(15):.

Data over sound (DoS) is an established technique that has experienced a resurgence in recent years, finding applications in areas such as contactless payments, device pairing, authentication, presence detection, toys, and offline data transfer. This study introduces CardiaWhisper, a system that extends the DoS concept to the medical domain by using a medical data-over-sound (MDoS) framework. CardiaWhisper integrates wearable biomedical sensors with home care systems, edge or IoT gateways, and telemedical networks or cloud platforms. Using a transmitter device, vital signs such as ECG (electrocardiogram) signals, PPG (photoplethysmogram) signals, RR (respiratory rate), and ACC (acceleration/movement) are sensed, conditioned, encoded, and acoustically transmitted to a nearby receiver-typically a smartphone, tablet, or other gadget-and can be further relayed to edge and cloud infrastructures. As a case study, this paper presents the real-time transmission and processing of ECG signals. The transmitter integrates an ECG sensing module, an encoder (either a PLL-based FM modulator chip or a microcontroller), and a sound emitter in the form of a standard piezoelectric speaker. The receiver, in the form of a mobile phone, tablet, or desktop computer, captures the acoustic signal via its built-in microphone and executes software routines to decode the data. It then enables a range of control and visualization functions for both local and remote users. Emphasis is placed on describing the system architecture and its key components, as well as the software methodologies used for signal decoding on the receiver side, where several algorithms are implemented using open-source, platform-independent technologies, such as JavaScript, HTML, and CSS. While the main focus is on the transmission of analog data, digital data transmission is also illustrated. The CardiaWhisper system is evaluated across several performance parameters, including functionality, complexity, speed, noise immunity, power consumption, range, and cost-efficiency. Quantitative measurements of the signal-to-noise ratio (SNR) were performed in various realistic indoor scenarios, including different distances, obstacles, and noise environments. Preliminary results are presented, along with a discussion of design challenges, limitations, and feasible applications. Our experience demonstrates that CardiaWhisper provides a low-power, eco-friendly alternative to traditional RF or Bluetooth-based medical wearables in various applications.

RevDate: 2025-08-16

Cui G, Zhang W, Xu W, et al (2025)

Efficient workflow scheduling using an improved multi-objective memetic algorithm in cloud-edge-end collaborative framework.

Scientific reports, 15(1):29754 pii:10.1038/s41598-025-08691-y.

With the rapid advancement of large-scale model technologies, AI agent frameworks built on foundation models have become a central focus of artificial-intelligence research. In cloud-edge-end collaborative computing frameworks, efficient workflow scheduling is essential to reducing both server energy consumption and overall makespan. This paper addresses this challenge by proposing an Improved Multi-Objective Memetic Algorithm (IMOMA) that simultaneously optimizes energy consumption and makespan. First, a multi-objective optimization model incorporating task execution constraints and priority constraints is developed, and complexity analysis confirms its NP-hard nature. Second, the IMOMA algorithm enhances population diversity through dynamic opposition-based learning, introduces local search operators tailored for bi-objective optimization, and maintains Pareto optimal solutions via an elite archive. A dynamic selection mechanism based on operator historical performance and an adaptive local search triggering strategy effectively balance global exploration and local exploitation capabilities. Experimental results on 10 standard datasets demonstrate that IMOMA achieves improvements of 93%, 7%, and 19% in hypervolume and 58%, 1%, and 23% in inverted generational distance compared to MOPSO, NSGA-II, and SPEA-II algorithms. Additionally, ablation experiments reveal the influence mechanisms of scheduling strategies, server configurations, and other constraints on optimization objectives, providing an engineering-oriented solution for real-world cloud-edge-end collaborative scenarios.

RevDate: 2025-08-16

Maray M (2025)

Intelligent deep learning for human activity recognition in individuals with disabilities using sensor based IoT and edge cloud continuum.

Scientific reports, 15(1):29640.

Aging is associated with a reduction in the capability to perform activities of everyday routine and a decline in physical activity, which affects physical and mental health. A human activity recognition (HAR) system can be a valuable tool for elderly individuals or patients, as it monitors their activities and detects any significant changes in behavior or events. When integrated with the Internet of Things (IoT), this system enables individuals to live independently while ensuring their well-being. The IoT-edge-cloud framework enhances this by processing data as close to the source as possible, either on edge devices or directly on the IoT devices themselves. However, the massive number of activity constellations and sensor configurations make the HAR problem challenging to solve deterministically. HAR involves collecting sensor data to classify diverse human activities and is a rapidly growing field. It presents valuable insights into the health, fitness, and overall wellness of individuals outside of hospital settings. Therefore, machine learning (ML) models are widely used to develop HAR systems that discover patterns of human activity from sensor data. In this manuscript, an Intelligent Deep Learning Technique for Human Activity Recognition of Persons with Disabilities using the Sensors Technology (IDLTHAR-PDST) is proposed. The purpose of the IDLTHAR-PDST technique is to efficiently recognize and interpret activities by leveraging sensor technology within a smart IoT-Edge-Cloud continuum. Initially, the IDLTHAR-PDST technique utilizes a min-max normalization-based data pre-processing model to optimize sensor data consistency and enhance model performance. For feature subset selection, the enhanced honey badger algorithm (EHBA) model is used to effectively reduce dimensionality while retaining critical activity-related features. Finally, the deep belief network (DBN) model is employed for HAR. To demonstrate the improved performance of the IDLTHAR-PDST model, a comprehensive simulation study was conducted. The performance validation of the IDLTHAR-PDST model showed a superior accuracy of 98.75% compared with existing techniques.
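
A minimal sketch of the min-max normalization pre-processing step with scikit-learn; the array is a stand-in for windowed sensor channels, since the paper's exact feature layout is not given.

```python
# Min-max normalization of raw sensor channels to [0, 1].
import numpy as np
from sklearn.preprocessing import MinMaxScaler

raw = np.array([[0.2, -9.8, 1.1],
                [0.4, -9.6, 0.9],
                [1.5, -8.9, 2.3]])       # rows = samples, cols = sensor channels

scaler = MinMaxScaler()                  # per-channel (x - min) / (max - min)
normalized = scaler.fit_transform(raw)
print(normalized)
```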

RevDate: 2025-08-16

Sorin V, Collins JD, Bratt AK, et al (2025)

Evaluating prompt and data perturbation sensitivity in large language models for radiology reports classification.

JAMIA open, 8(4):ooaf073.

OBJECTIVES: Large language models (LLMs) offer potential in natural language processing tasks in healthcare. Due to the need for high accuracy, understanding their limitations is essential. The purpose of this study was to evaluate the performance of LLMs in classifying radiology reports for the presence of pulmonary embolism (PE) under various conditions, including different prompt designs and data perturbations.

MATERIALS AND METHODS: In this retrospective, institutional review board-approved study, we evaluated three Google LLMs (Gemini-1.5-Pro, Gemini-1.5-Flash-001, and Gemini-1.5-Flash-002) in classifying 11 999 pulmonary CT angiography radiology reports for PE. Ground truth labels were determined by concordance between a computer vision-based PE detection (CVPED) algorithm and multiple LLM runs under various configurations. Discrepancies between the algorithms' classifications were aggregated and manually reviewed. We evaluated the effects of prompt design, data perturbations, and repeated analyses across geographic cloud regions. Performance metrics were calculated.

RESULTS: Of 11 999 reports, 1296 (10.8%) were PE-positive. Accuracy across LLMs ranged between 0.953 and 0.996. The highest recall was achieved by a prompt modified after a review of the misclassified cases (up to 0.997). Few-shot prompting improved recall (up to 0.99), while chain-of-thought generally degraded performance. Gemini-1.5-Flash-002 demonstrated the highest robustness against data perturbations. Geographic cloud region variability was minimal for Gemini-1.5-Pro, while the Flash models showed stable performance.

DISCUSSION AND CONCLUSION: LLMs demonstrated high performance in classifying radiology reports, though results varied with prompt design and data quality. These findings underscore the need for systematic evaluation and validation of LLMs for clinical applications, particularly in high-stakes scenarios.

RevDate: 2025-08-14

Hizem M, Aoueileyine MO, Belhaouari SB, et al (2025)

Sustainable E-Health: Energy-Efficient Tiny AI for Epileptic Seizure Detection via EEG.

Biomedical engineering and computational biology, 16:11795972241283101.

Tiny Artificial Intelligence (Tiny AI) is transforming resource-constrained embedded systems, particularly in e-health applications, by introducing a shift in Tiny Machine Learning (TinyML) and its integration with the Internet of Things (IoT). Unlike conventional machine learning (ML), which demands substantial processing power, TinyML strategically delegates processing requirements to the cloud infrastructure, allowing lightweight models to run on embedded devices. This study aimed to (i) develop a TinyML workflow that details the steps for model creation and deployment in resource-constrained environments and (ii) apply the workflow to e-health applications for the real-time detection of epileptic seizures using electroencephalography (EEG) data. The methodology employs a dataset of 4097 EEG recordings per patient, each 23.5 seconds long, from 500 patients, to develop a robust and resilient model. The model was deployed using TinyML on microcontrollers tailored to hardware with limited resources. TensorFlow Lite (TFLite) efficiently runs ML models on small devices, such as wearables. Simulation outcomes demonstrated significant performance, particularly in predicting epileptic seizures, with the ExtraTrees Classifier achieving a notable 99.6% Area Under the Curve (AUC) on the validation set. Because of its superior performance, the ExtraTrees Classifier was selected as the preferred model. For the optimized TinyML model, the accuracy remained practically unchanged, whereas inference time was significantly reduced. Additionally, the converted model had a smaller size of 256 KB, approximately ten times smaller, making it suitable for microcontrollers with a capacity of no more than 1 MB. These findings highlight the potential of TinyML to significantly enhance healthcare applications by enabling real-time, energy-efficient decision-making directly on local devices. This is especially valuable in scenarios with limited computing resources or during emergencies, as it reduces latency, ensures privacy, and operates without reliance on cloud infrastructure. Moreover, by reducing the size of training datasets needed, TinyML helps lower overall costs and minimizes the risk of overfitting, making it an even more cost-effective and reliable solution for healthcare innovations.
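
A minimal sketch of the TensorFlow Lite conversion step described above. The small Keras network and input length are placeholders; the study's best model was an ExtraTrees classifier, which would need a different conversion path.

```python
# Convert a Keras model to a quantized TFLite model for microcontroller deployment.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(256,)),                    # one EEG feature window (illustrative length)
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # seizure / no seizure
])
model.compile(optimizer="adam", loss="binary_crossentropy")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # weight quantization to shrink the model
tflite_bytes = converter.convert()

with open("seizure_model.tflite", "wb") as f:
    f.write(tflite_bytes)
print(f"TFLite model size: {len(tflite_bytes) / 1024:.1f} KB")
```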

RevDate: 2025-08-12

Osório NS, LD Garma (2025)

Teaching Python with team-based learning: using cloud-based notebooks for interactive coding education.

FEBS open bio [Epub ahead of print].

Computer programming and bioinformatics are increasingly essential topics in life sciences research, facilitating the analysis of large and complex 'omics' datasets. However, they remain challenging for students without a background in mathematics or computing. To address challenges in teaching programming within biomedical education, this study integrates team-based learning (TBL) with cloud-hosted interactive Python notebooks, targeting enhanced student engagement, understanding, and collaboration in bioinformatics in two Master's-level classes with 28 biomedical students in total. Four interactive notebooks covering Python basics and practical bioinformatics applications (ranging from data manipulation to multi-omics analysis) were developed. Hosted on GitHub and integrated with Google Colaboratory, these notebooks ensured equal access and eliminated technical barriers for students with varied computing setups. During the TBL session, students were highly engaged with the notebooks, which led to a greater interest in Python and increased confidence in using bioinformatics tools. Feedback highlighted the value of TBL and interactive notebooks in enriching the learning experience, while also identifying a need for further development in bioinformatics research skills. Although more validity evidence is needed in future studies, this blended, cloud-based TBL approach effectively made bioinformatics education more accessible and engaging, suggesting its potential for enhancing computational training across life sciences.

RevDate: 2025-08-13
CmpDate: 2025-08-11

González LL, Arias-Serrano I, Villalba-Meneses F, et al (2024)

Deep learning neural network development for the classification of bacteriocin sequences produced by lactic acid bacteria.

F1000Research, 13:981.

BACKGROUND: The rise of antibiotic-resistant bacteria presents a pressing need for exploring new natural compounds with innovative mechanisms to replace existing antibiotics. Bacteriocins offer promising alternatives for developing therapeutic and preventive strategies in livestock, aquaculture, and human health. Specifically, those produced by lactic acid bacteria (LAB) are recognized as GRAS (Generally Recognized As Safe) and QPS (Qualified Presumption of Safety). This study aims to develop a deep learning model specifically designed to classify bacteriocins by their LAB origin, using interpretable k-mer features and embedding vectors to enable applications in antimicrobial discovery.

METHODS: We developed a deep learning neural network for binary classification of bacteriocin amino acid sequences (BacLAB vs. Non-BacLAB). Features were extracted using k-mers (k=3,5,7,15,20) and vector embeddings (EV). Ten feature combinations were tested (e.g., EV, EV+5-mers+7-mers). Sequences were filtered by length (50-2000 AA) to ensure uniformity, and class balance was maintained (24,964 BacLAB vs. 25,000 Non-BacLAB). The model was trained on Google Colab, demonstrating computational accessibility without specialized hardware.

RESULTS: The '5-mers+7-mers+EV' group achieved the best performance, with k-fold cross-validation (k=30) showing 9.90% loss, 90.14% accuracy, 90.30% precision, 90.10% recall and F1 score. Fold 22 stood out with 8.50% loss, 91.47% accuracy, and 91.00% precision, recall, and F1 score. Five sets of 100 LAB-specific k-mers were identified, revealing conserved motifs. Despite high accuracy, sequence length variation (50-2000 AA) may bias k-mer representation, favoring longer sequences. Additionally, experimental validation is required to confirm the biological activity of predicted bacteriocins. These aspects highlight directions for future research.

CONCLUSIONS: The model developed in this study achieved consistent results with those seen in the reviewed literature. It outperformed some studies by 3-10%. Its implementation in resource-limited settings is feasible via cloud platforms like Google Colab. The identified k-mers could guide the design of synthetic antimicrobials, pending further in vitro validation.
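
A minimal Python sketch of the k-mer featurization described in the METHODS section above: count overlapping k-mers in an amino-acid sequence and normalize to frequencies. The example peptide is illustrative, and the embedding vectors and neural network itself are omitted.

```python
# k-mer frequency features for an amino-acid sequence.
from collections import Counter

def kmer_frequencies(sequence: str, k: int) -> dict:
    """Relative frequency of each overlapping k-mer in an amino-acid sequence."""
    total = len(sequence) - k + 1
    if total <= 0:
        return {}
    counts = Counter(sequence[i:i + k] for i in range(total))
    return {kmer: n / total for kmer, n in counts.items()}

peptide = "MKKIEKLTEKEMANIIGGKYYGNGVTCGKHSCSVDWGKAT"   # illustrative bacteriocin-like sequence
features = {}
for k in (3, 5, 7):              # a subset of the k values listed in the abstract
    features.update(kmer_frequencies(peptide, k))
print(len(features), "k-mer features for this sequence")
```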

RevDate: 2025-08-13

Gao Z, Liu D, C Zheng (2025)

Vehicle-to-everything decision optimization and cloud control based on deep reinforcement learning.

Scientific reports, 15(1):29160.

To address the challenges of decision optimization and road segment hazard assessment within complex traffic environments, and to enhance the safety and responsiveness of autonomous driving, a Vehicle-to-Everything (V2X) decision framework is proposed. This framework is structured into three modules: vehicle perception, decision-making, and execution. The vehicle perception module integrates sensor fusion techniques to capture real-time environmental data, employing deep neural networks to extract essential information. In the decision-making module, deep reinforcement learning algorithms are applied to optimize decision processes by maximizing expected rewards. Meanwhile, the road segment hazard classification module, utilizing both historical traffic data and real-time perception information, adopts a hazard evaluation model to classify road conditions automatically, providing real-time feedback to guide vehicle decision-making. Furthermore, an autonomous driving cloud control platform is designed, augmenting decision-making capabilities through centralized computing resources, enabling large-scale data analysis, and facilitating collaborative optimization. Experimental evaluations conducted within simulation environments and utilizing the KITTI dataset demonstrate that the proposed V2X decision optimization method substantially outperforms conventional decision algorithms. Vehicle decision accuracy increased by 9.0%, rising from 89.2 to 98.2%. Additionally, the response time of the cloud control system decreased from 178 ms to 127 ms, marking a reduction of 28.7%, which significantly enhances decision efficiency and real-time performance. The introduction of the road segment hazard classification model also results in a hazard assessment accuracy of 99.5%, maintaining over 95% accuracy even in high-density traffic and complex road conditions, thus illustrating strong adaptability. The results highlight the effectiveness of the proposed V2X decision optimization framework and cloud control platform in enhancing the decision quality and safety of autonomous driving systems.

RevDate: 2025-08-13

Murala DK, Prasada Rao KV, Vuyyuru VA, et al (2025)

A service-oriented microservice framework for differential privacy-based protection in industrial IoT smart applications.

Scientific reports, 15(1):29230.

The rapid advancement of key technologies such as Artificial Intelligence (AI), the Internet of Things (IoT), and edge-cloud computing has significantly accelerated the transformation toward smart industries across various domains, including finance, manufacturing, and healthcare. Edge and cloud computing offer low-cost, scalable, and on-demand computational resources, enabling service providers to deliver intelligent data analytics and real-time insights to end-users. However, despite their potential, the practical adoption of these technologies faces critical challenges, particularly concerning data privacy and security. AI models, especially in distributed environments, may inadvertently retain and leak sensitive training data, exposing users to privacy risks in the event of malicious attacks. To address these challenges, this study proposes a privacy-preserving, service-oriented microservice architecture tailored for intelligent Industrial IoT (IIoT) applications. The architecture integrates Differential Privacy (DP) mechanisms into the machine learning pipeline to safeguard sensitive information. It supports both centralised and distributed deployments, promoting flexible, scalable, and secure analytics. We developed and evaluated differentially private models, including Radial Basis Function Networks (RBFNs), across a range of privacy budgets (ɛ), using both real-world and synthetic IoT datasets. Experimental evaluations using RBFNs demonstrate that the framework maintains high predictive accuracy (up to 96.72%) with acceptable privacy guarantees for budgets [Formula: see text]. Furthermore, the microservice-based deployment achieves an average latency reduction of 28.4% compared to monolithic baselines. These results confirm the effectiveness and practicality of the proposed architecture in delivering privacy-preserving, efficient, and scalable intelligence for IIoT environments. Additionally, the microservice-based design enhanced computational efficiency and reduced latency through dynamic service orchestration. This research demonstrates the feasibility of deploying robust, privacy-conscious AI services in IIoT environments, paving the way for secure, intelligent, and scalable industrial systems.
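
A minimal sketch of the differential-privacy ingredient, releasing a statistic with calibrated Laplace noise; the paper applies DP inside the RBFN training pipeline, so this standalone mechanism only illustrates how the privacy budget epsilon trades off against noise. All values are illustrative.

```python
# Laplace mechanism: release a bounded statistic under epsilon-differential privacy.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng=np.random.default_rng()):
    """Release true_value with epsilon-DP, given its L1 sensitivity."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

readings = np.array([71.2, 69.8, 74.5, 70.1])   # e.g. sensor values from one device, bounded in [0, 100]
true_mean = readings.mean()
for eps in (0.1, 1.0, 10.0):                    # smaller epsilon = stronger privacy, more noise
    noisy = laplace_mechanism(true_mean, sensitivity=(100 - 0) / len(readings), epsilon=eps)
    print(f"epsilon={eps}: released mean = {noisy:.2f} (true {true_mean:.2f})")
```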

RevDate: 2025-08-12
CmpDate: 2025-08-09

Zhang H, Zhang R, J Sun (2025)

Developing real-time IoT-based public safety alert and emergency response systems.

Scientific reports, 15(1):29056.

This paper presents the design and evaluation of a real-time IoT-based emergency response and public safety alert system tailored for rapid detection, classification, and dissemination of alerts during critical incidents. The proposed architecture combines a distributed network of heterogeneous sensors (e.g., gas, flame, vibration, and biometric), edge computing nodes (Raspberry Pi, ESP32), and cloud platforms (AWS IoT, Firebase) to ensure low-latency and high-availability operations. Communication is facilitated using secure MQTT over TLS, with fallback to LoRa for rural or low-connectivity environments. A prototype was implemented and tested across four emergency scenarios (fire, traffic accident, gas leak, and medical distress) within a smart city simulation testbed. The system achieved consistent alert latency under 450 ms, detection accuracy exceeding 95%, and scalability supporting over 12,000 concurrent devices. A comprehensive comparison against seven state-of-the-art systems confirmed superior performance in latency, reliability (99.1% alert success), and uptime (99.8%). These results underscore the system's potential for deployment in urban, industrial, and infrastructure-vulnerable environments, with future work aimed at incorporating AI-driven prediction and federated learning for cloudless operation.
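
A minimal sketch of the MQTT-over-TLS alert path with the paho-mqtt client (1.x API shown; version 2.x additionally requires a callback_api_version argument to Client()). Broker hostname, topic, credentials, and certificate path are placeholders, not values from the paper.

```python
# Publish a safety alert from an edge node over MQTT with TLS.
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="edge-node-01")
client.tls_set(ca_certs="ca.pem")               # verify the broker's certificate
client.username_pw_set("edge-node-01", "secret")
client.connect("mqtt.broker.example", 8883)     # 8883 = MQTT over TLS
client.loop_start()

alert = {"type": "gas_leak", "severity": "high", "ppm": 412, "node": "hall-3"}
client.publish("safety/alerts/gas", json.dumps(alert), qos=1)   # at-least-once delivery
client.loop_stop()
client.disconnect()
```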

RevDate: 2025-08-12

Wei X, Li R, Xiang S, et al (2025)

Smart fiber with overprinted patterns to function as chip-like multi-threshold logic switch circuit.

Nature communications, 16(1):7314.

There is a growing demand for precise health management capable of differentially caring for every inch of skin as an on-body network, in which each network node executes not only multi-physiological sensing but also in-situ logic computing to save cloud computing power for massive data analysis. Herein, we present a smart fiber with multiple layers of overprinted patterns, composed of many small units 0.3 mm long that function as a one-dimensional (1D) array of chip-like multi-threshold logic-switch circuits. Via soft contact of curved surfaces between fiber and ink droplet, an overprinting method is developed for stacking different layers of patterns with a line width of 75 μm in a staggered way, enabling batch production of circuit units along one long fiber. A smart fiber with a high density of >3000 circuit units per meter can be woven with fiber-type sensors to construct a textile-type body-covering network, where each node serves as a computing terminal.

RevDate: 2025-08-11
CmpDate: 2025-08-08

Ramos M, Shepherd L, Sheffield NC, et al (2025)

Bioconductor's Computational Ecosystem for Genomic Data Science in Cancer.

Methods in molecular biology (Clifton, N.J.), 2932:1-46.

The Bioconductor project enters its third decade with over two thousand packages for genomic data science, over 100,000 annotation and experiment resources, and a global system for convenient distribution to researchers. Over 60,000 PubMed Central citations and terabytes of content shipped per month attest to the impact of the project on cancer genomic data science. This report provides an overview of cancer genomics resources in Bioconductor. After an overview of Bioconductor project principles, we address exploration of institutionally curated cancer genomics data such as TCGA. We then review genomic annotation and ontology resources relevant to cancer and then briefly survey analytical workflows addressing specific topics in cancer genomics. Concluding sections cover how new software and data resources are brought into the ecosystem and how the project is tackling needs for training of the research workforce. Bioconductor's strategies for supporting methods developers and researchers in cancer genomics are evolving along with experimental and computational technologies. All the tools described in this report are backed by regularly maintained learning resources that can be used locally or in cloud computing environments.

RevDate: 2025-08-10
CmpDate: 2025-08-08

Wu Y, Li K, Tang L, et al (2024)

A Review of emergency medical services for stroke.

African health sciences, 24(3):382-392.

In the past decade, Emergency Medical Services have been associated with innovations in technology; the 911 telephone system and two-way radio have improved the notification, scheduling, and response processes. The past twenty years have witnessed unparalleled innovation in computing frameworks. These new frameworks, concentrated in mobile, social, cloud computing, and big data, essentially affect the entire society. In the last ten years, major innovations and strategic improvements have occurred that will affect the concepts and communication methods of Emergency Medical Services in the future. Emergency Medical Services can treat various diseases in the correct way. For example, early recognition of stroke by Emergency Medical Service personnel is an important consideration for stroke patients. Prehospital stroke screening tools that have been preliminarily evaluated for sensitivity and specificity are necessary to improve detection rates of prehospital stroke by Emergency Medical Service experts. This is an excellent time for Emergency Medical Services to play a key role in achieving and transcending this vision. The motivation behind this article is to provide an extensive investigation and highlight opportunities for Emergency Medical Service personnel to improve stroke care.

RevDate: 2025-08-08
CmpDate: 2025-08-08

Safi A, Shaikh M, Hoang MT, et al (2025)

Decoding Sepsis: A Technical Blueprint for an Algorithm-Driven System Architecture.

Studies in health technology and informatics, 329:1970-1971.

This paper presents a scalable, serverless machine learning operations (MLOps) architecture for near real-time sepsis detection in Emergency Department (ED) waiting rooms. Built on the Amazon Web Services (AWS) cloud environment, the system processes HL7 messages via MuleSoft, using Lambda for data handling and SageMaker for model deployment. Data is stored in Aurora PostgreSQL and visualized in on-premise Tableau™. With 99.7% of HL7 messages successfully processed, the system shows strong performance, though occasional downtime, code set mismatches, and peak execution times reveal areas for optimization.
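To make the Lambda-to-SageMaker hand-off concrete, here is a minimal Python sketch of the general pattern, assuming boto3 and a hypothetical endpoint name and feature set; the actual MuleSoft integration, HL7 parsing, and model inputs used by the authors are not shown.

    # Minimal sketch of a Lambda handler that scores an incoming HL7-derived
    # record against a SageMaker endpoint. Endpoint name and feature order are
    # illustrative assumptions, not the system described in the paper.
    import json
    import boto3

    RUNTIME = boto3.client("sagemaker-runtime")
    ENDPOINT = "sepsis-risk-endpoint"  # hypothetical endpoint name

    def lambda_handler(event, context):
        # 'event' is assumed to already carry vitals parsed from the HL7 message upstream.
        vitals = event["vitals"]  # e.g. {"hr": 118, "temp": 38.9, "rr": 24, "sbp": 92}
        payload = ",".join(str(vitals[k]) for k in ("hr", "temp", "rr", "sbp"))
        response = RUNTIME.invoke_endpoint(
            EndpointName=ENDPOINT,
            ContentType="text/csv",
            Body=payload,
        )
        score = float(response["Body"].read().decode("utf-8"))
        return {"statusCode": 200, "body": json.dumps({"sepsis_risk": score})}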

RevDate: 2025-08-08
CmpDate: 2025-08-08

Kishimoto K, Sugiyama O, Iwao T, et al (2025)

Integrating Scalable Analytical Tools and Data Warehouses on Private Cloud.

Studies in health technology and informatics, 329:1584-1585.

This study addresses the need for efficient and scalable data warehouse solutions by integrating on-premises environments with private cloud-based infrastructures. Kubernetes was employed to dynamically generate secure virtual machines, offering users independent environments for data analysis. Performance testing demonstrated high query speeds, with 240,000 records extracted from a 301 GB dataset in 12.4 seconds. Security measures, using a VPN connection between hospital networks and Google Cloud, allowed the safe use of Google's APIs. This scalable infrastructure can accommodate diverse analytical needs.
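As a rough illustration of how per-user analysis environments can be provisioned dynamically, the sketch below uses the official Kubernetes Python client to launch an isolated analysis pod; the namespace, image, and resource figures are assumptions for illustration, not the configuration reported in this study.

    # Sketch: dynamically create an isolated analysis pod per user with the
    # Kubernetes Python client (names, image, and resource sizes are assumptions).
    from kubernetes import client, config

    def launch_analysis_pod(user_id: str, namespace: str = "dwh-analytics"):
        config.load_kube_config()  # or config.load_incluster_config() inside the cluster
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(
                name=f"analysis-{user_id}",
                labels={"app": "dwh-analysis", "user": user_id},
            ),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="workbench",
                        image="example.registry/analysis-workbench:latest",
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "2", "memory": "8Gi"},
                            limits={"cpu": "4", "memory": "16Gi"},
                        ),
                    )
                ],
                restart_policy="Never",
            ),
        )
        client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)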

RevDate: 2025-08-08
CmpDate: 2025-08-08

Leeming G, Hughes J, Joyce D, et al (2025)

Building a Learning Health System-Focused Trusted Research Environment for Mental Health.

Studies in health technology and informatics, 329:174-178.

Trusted Research Environments (TREs) are increasingly used as platforms for secure health data research, but they can also be used for implementing research findings or for action research (researchers supporting health professionals to solve problems with advanced data analytics). Most TREs have been designed to support analysis of well-structured and coded data; however, with much clinical data recorded as unstructured notes, especially in mental health care, a greater variety of tools and data management services is needed for safe research, including natural language processing and anonymisation of data sources. The Mental Health Research for Innovation Centre (M-RIC), co-hosted by the University of Liverpool and Mersey Care NHS Foundation Trust, has implemented a novel TRE design that incorporates modern data engineering concepts to improve how researchers access a wider variety of linked data and machine learning tools, enabling them both to undertake research and to deploy these tools directly into mental health care.

RevDate: 2025-08-16

Adams MCB, Griffin C, Adams H, et al (2025)

Enhancing Gen3 for clinical trial time series analytics and data discovery: a data commons framework for NIH clinical trials.

Frontiers in digital health, 7:1570009.

This work presents a framework for enhancing Gen3, an open-source data commons platform, with temporal visualization capabilities for clinical trial research. We describe the technical implementation of cloud-native architecture and integrated visualization tools that enable standardized analytics for longitudinal clinical trial data while adhering to FAIR principles. The enhancement includes Kubernetes-based container orchestration, Kibana-based temporal analytics, and automated ETL pipelines for data harmonization. Technical validation demonstrates reliable handling of varied time-based data structures, while maintaining temporal precision and measurement context. The framework's implementation in NIH HEAL Initiative networks studying chronic pain and substance use disorders showcases its utility for real-time monitoring of longitudinal outcomes across multiple trials. This adaptation provides a model for research networks seeking to enhance their data commons capabilities while ensuring findable, accessible, interoperable, and reusable clinical trial data.
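For readers prototyping similar temporal roll-ups, the sketch below shows a date-histogram aggregation over longitudinal outcome records with the elasticsearch Python client, the kind of query that Kibana dashboards are typically built on; the index, field names, and study identifier are illustrative assumptions rather than the Gen3 schema.

    # Sketch: monthly date-histogram over longitudinal trial observations,
    # with a per-bucket mean of an outcome field. All names are assumptions.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # connection details are an assumption

    resp = es.search(
        index="clinical-observations",            # hypothetical index of longitudinal records
        size=0,
        query={"term": {"study_id": "TRIAL-001"}},
        aggs={
            "visits_over_time": {
                "date_histogram": {"field": "visit_date", "calendar_interval": "month"},
                "aggs": {"mean_pain_score": {"avg": {"field": "pain_score"}}},
            }
        },
    )
    for bucket in resp["aggregations"]["visits_over_time"]["buckets"]:
        print(bucket["key_as_string"], bucket["mean_pain_score"]["value"])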

RevDate: 2025-08-07
CmpDate: 2025-08-04

Hu Y, Li Y, Cui B, et al (2025)

Internet of things enabled deep learning monitoring system for realtime performance metrics and athlete feedback in college sports.

Scientific reports, 15(1):28405.

This study presents an Internet of Things (IoT)-enabled Deep Learning Monitoring (IoT-E-DLM) model for real-time Athletic Performance (AP) tracking and feedback in collegiate sports. The proposed work integrates advanced wearable sensor technologies with a hybrid neural network combining Temporal Convolutional Networks and Bidirectional Long Short-Term Memory (TCN + BiLSTM) with attention mechanisms. It is designed to overcome key challenges in processing heterogeneous, high-frequency sensor data and delivering low-latency, sport-specific feedback. The system deploys edge computing for real-time local processing and a cloud setup for high-complexity analytics, achieving a balance between responsiveness and accuracy. The system was evaluated with 147 student-athletes across numerous sports, including track and field, basketball, soccer, and swimming, over 12 months at Shangqiu University. The proposed model achieved a prediction accuracy of 93.45% with an average processing latency of 12.34 ms, outperforming conventional and state-of-the-art approaches. The system also demonstrated efficient resource usage (CPU: 68.34%, GPU: 72.56%), high data capture reliability (98.37%), and precise temporal synchronization. These results confirm the model's effectiveness in enabling real-time performance monitoring and feedback delivery, establishing a robust groundwork for future developments in Artificial Intelligence (AI)-driven sports analytics.
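A hybrid network of the general kind described (dilated temporal convolutions feeding a bidirectional LSTM with attention) can be sketched in tf.keras as follows; the window length, channel counts, and number of classes are assumptions, not the authors' configuration.

    # Sketch of a TCN-style + BiLSTM + attention classifier in tf.keras.
    # Layer sizes, window length, and class count are illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_model(window=200, n_features=9, n_classes=5):
        inp = layers.Input(shape=(window, n_features))
        x = inp
        # Temporal-convolutional stack with increasing dilation (TCN-style).
        for dilation in (1, 2, 4, 8):
            x = layers.Conv1D(64, kernel_size=3, dilation_rate=dilation,
                              padding="causal", activation="relu")(x)
        # Bidirectional LSTM over the convolutional features.
        x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
        # Simple self-attention followed by pooling over time steps.
        attn = layers.Attention()([x, x])
        x = layers.GlobalAveragePooling1D()(attn)
        out = layers.Dense(n_classes, activation="softmax")(x)
        model = models.Model(inp, out)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model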

RevDate: 2025-08-03

Lima DB, Ruwolt M, Santos MDM, et al (2025)

Q2C: A software for managing mass spectrometry facilities.

Journal of proteomics, 321:105511 pii:S1874-3919(25)00138-1 [Epub ahead of print].

We present Q2C, an open-source software designed to streamline mass spectrometer queue management and assess performance based on quality control metrics. Q2C provides a fast and user-friendly interface to visualize project queues, manage analysis schedules, and keep track of samples that have already been processed. Our software includes analytical tools to ensure equipment calibration and provides comprehensive log documentation for machine maintenance, enhancing operational efficiency and reliability. Additionally, Q2C integrates with Google™ Cloud, allowing users to access and manage the software from different locations while keeping all data synchronized and seamlessly integrated across the system. For multi-user environments, Q2C implements a write-locking mechanism that checks for concurrent operations before saving data. When conflicts are detected, subsequent write requests are automatically queued to prevent data corruption, while the interface continuously refreshes to display the most current information from the cloud storage. Finally, Q2C, a demonstration video, and a user tutorial are freely available for academic use at https://github.com/diogobor/Q2C. Data are available from the ProteomeXchange consortium (identifier PXD055186). SIGNIFICANCE: Q2C addresses a critical gap in mass spectrometry facility management by unifying sample queue management with instrument performance monitoring. It ensures optimal instrument utilization, reduces turnaround times, and enhances data quality by dynamically prioritizing and routing samples based on analysis type and urgency. Unlike existing tools, Q2C integrates queue control and QC in a single platform, maximizing operational efficiency and reliability.
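The check-lock-then-queue behaviour described above can be approximated with a small generic Python sketch; the lock and upload helpers are hypothetical stand-ins for whatever cloud-storage calls the real tool uses, so this illustrates the pattern rather than Q2C's implementation.

    # Generic sketch of check-lock-then-write with queuing of conflicting saves.
    import queue

    pending_writes = queue.Queue()

    def save_with_lock(payload, acquire_cloud_lock, release_cloud_lock, upload_state):
        # acquire_cloud_lock / release_cloud_lock / upload_state are hypothetical
        # helpers standing in for the tool's real cloud-storage calls.
        if not acquire_cloud_lock(timeout_s=5):
            # Another client is writing: queue this request instead of risking corruption.
            pending_writes.put(payload)
            return False
        try:
            upload_state(payload)
            # Flush any writes that were queued while the lock was unavailable.
            while not pending_writes.empty():
                upload_state(pending_writes.get())
            return True
        finally:
            release_cloud_lock()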

RevDate: 2025-08-06

Lin SY, Wang JQ, Peng SM, et al (2025)

Multihop cost awareness task migration with networking load balance technology for vehicular edge computing.

Scientific reports, 15(1):28126.

6G technology aims to revolutionize the mobile communication industry by revamping the role of vehicular wireless connections. Its network architecture will evolve towards multi-access edge computing (MEC), distributing cloud applications to support inter-vehicle applications such as cooperative driving. As the number of tasks offloaded to MEC servers increases, local MEC servers associated with vehicles may encounter insufficient computing resources for task offloading. This issue can be mitigated if neighboring servers can collaboratively provide computing capabilities to the local server for task migration. This paper investigates dynamic resource allocation and task migration mechanisms for cooperative vehicular edge computing (VEC) servers to expand the computing capabilities of the local server. The multihop cost awareness task migration (MCATM) mechanism is proposed, which ensures that tasks can be migrated to the most suitable VEC server when the local server is overloaded. The MCATM mechanism first determines whether the nearest VEC server can handle the computational tasks. We subsequently address the issue of duplicate selection to choose an appropriate VEC server for task migration among n-hop neighboring servers. Next, we focus on finding efficient transmission paths between the local and destination VEC servers to facilitate seamless task migration. The MCATM includes (i) the weight variable analytic hierarchy process (WVAHP) to select a suitable server among multihop cooperative VEC servers for task migration, and (ii) the pre-allocation with cost balance (PACB) path selection algorithm. The simulation results demonstrate that the MCATM enables the migration of computational tasks to appropriate neighboring VEC servers, increasing the task migration success rate while balancing network traffic and computing server capabilities.
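As a simplified illustration of the selection step, the sketch below ranks candidate neighbour servers by a weighted score over normalised criteria; the weights and criteria are assumptions, and the paper's WVAHP and PACB algorithms are considerably more elaborate.

    # Generic sketch: rank n-hop neighbour servers for task migration by a
    # weighted score over normalised criteria (weights and criteria are assumptions).
    def select_migration_target(candidates, weights=None):
        # candidates: list of dicts, e.g.
        # {"id": "vec-7", "free_cpu": 0.6, "hops": 2, "link_cost": 0.3}
        weights = weights or {"free_cpu": 0.5, "hops": 0.3, "link_cost": 0.2}

        def normalise(values, invert=False):
            lo, hi = min(values), max(values)
            span = (hi - lo) or 1.0
            return [((hi - v) if invert else (v - lo)) / span for v in values]

        cpu = normalise([c["free_cpu"] for c in candidates])            # more is better
        hop = normalise([c["hops"] for c in candidates], invert=True)    # fewer is better
        lnk = normalise([c["link_cost"] for c in candidates], invert=True)

        scores = [weights["free_cpu"] * cpu[i] + weights["hops"] * hop[i]
                  + weights["link_cost"] * lnk[i] for i in range(len(candidates))]
        return candidates[scores.index(max(scores))]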

RevDate: 2025-08-03

Eisenbraun B, Ho A, Meyer PA, et al (2025)

Accelerating structural dynamics through integrated research informatics.

Structural dynamics (Melville, N.Y.), 12(4):041101.

Structural dynamics research requires robust computational methods, reliable software, accessible data, and scalable infrastructure. Managing these components is complex and directly affects reproducibility and efficiency. The SBGrid Consortium addresses these challenges through a three-pillar approach that encompasses Software, Data, and Infrastructure, designed to foster a consistent and rigorous computational environment. At the core is the SBGrid software collection (>620 curated applications), supported by the Capsules Software Execution Environment, which ensures conflict-free, version-controlled execution. The SBGrid Data Bank supports open science by enabling the publication of primary experimental data. SBCloud, a fully managed cloud computing platform, provides scalable, on-demand infrastructure optimized for structural biology workloads. Together, they reduce computational friction, enabling researchers to focus on interpreting time-resolved data, modeling structural transitions, and managing large simulation datasets for advancing structural dynamics. This integrated platform delivers a reliable and accessible foundation for computationally intensive research across diverse scientific fields sharing common computational methods.

RevDate: 2025-08-03

Masjoodi S, Anbardar MH, Shokripour M, et al (2025)

Whole Slide Imaging (WSI) in Pathology: Emerging Trends and Future Applications in Clinical Diagnostics, Medical Education, and Pathology.

Iranian journal of pathology, 20(3):257-265.

BACKGROUND & OBJECTIVE: Whole Slide Imaging (WSI) has emerged as a transformative technology in the fields of clinical diagnostics, medical education, and pathology research. By digitizing entire glass slides into high-resolution images, WSI enables advanced remote collaboration, the integration of artificial intelligence (AI) into diagnostic workflows, and facilitates large-scale data sharing for multi-center research.

METHODS: This paper explores the growing applications of WSI, focusing on its impact on diagnostics through telepathology, AI-powered diagnoses and precision medicine, and educational advancements. In this report, we will highlight the profound impact of WSI and address the challenges that must be overcome to enable its broader adoption.

RESULTS & CONCLUSION: Despite its many advantages, challenges such as infrastructure limitations and regulatory issues need to be addressed for broader adoption. The future of WSI lies in its ability to integrate with cloud-based platforms and big data analytics, continuing to drive the digital transformation of pathology.

RevDate: 2025-08-18
CmpDate: 2025-07-31

Beyer D, Delancey E, L McLeod (2025)

Automating Colon Polyp Classification in Digital Pathology by Evaluation of a "Machine Learning as a Service" AI Model: Algorithm Development and Validation Study.

JMIR formative research, 9:e67457.

BACKGROUND: Artificial intelligence (AI) models are increasingly being developed to improve the efficiency of pathological diagnoses. Rapid technological advancements are leading to more widespread availability of AI models that can be used by domain-specific experts (ie, pathologists and medical imaging professionals). This study presents an innovative AI model for the classification of colon polyps, developed using AutoML algorithms that are readily available from cloud-based machine learning platforms. Our aim was to explore if such AutoML algorithms could generate robust machine learning models that are directly applicable to the field of digital pathology.

OBJECTIVE: The objective of this study was to evaluate the effectiveness of AutoML algorithms in generating robust machine learning models for the classification of colon polyps and to assess their potential applicability in digital pathology.

METHODS: Whole-slide images from both public and institutional databases were used to develop a training set for 3 classifications of common entities found in colon polyps: hyperplastic polyps, tubular adenomas, and normal colon. The AI model was developed using an AutoML algorithm from Google's VertexAI platform. A test subset of the data was withheld to assess model accuracy, sensitivity, and specificity.

RESULTS: The AI model displayed a high accuracy rate, identifying tubular adenoma and hyperplastic polyps with 100% success and normal colon with 97% success. Sensitivity and specificity error rates were very low.

CONCLUSIONS: This study demonstrates how accessible AutoML algorithms can readily be used in digital pathology to develop diagnostic AI models using whole-slide images. Such models could be used by pathologists to improve diagnostic efficiency.
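For orientation, an AutoML workflow like the one described can be driven from the Vertex AI Python SDK roughly as sketched below; the project, region, import file, and training budget are illustrative assumptions rather than the study's actual settings.

    # Sketch: training an AutoML image-classification model with the Vertex AI
    # Python SDK. Project, region, and the GCS import file are assumptions;
    # the study's whole-slide image tiles would be prepared separately.
    from google.cloud import aiplatform

    aiplatform.init(project="my-pathology-project", location="us-central1")

    dataset = aiplatform.ImageDataset.create(
        display_name="colon-polyp-tiles",
        gcs_source="gs://my-bucket/polyp_labels.csv",  # image URI + label per row
        import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
    )

    job = aiplatform.AutoMLImageTrainingJob(
        display_name="polyp-classifier",
        prediction_type="classification",
    )

    model = job.run(
        dataset=dataset,
        model_display_name="polyp-classifier-v1",
        budget_milli_node_hours=8000,  # 8 node-hours of training budget (assumed)
    )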

RevDate: 2025-08-02

Delogu F, Aspinall C, Ray K, et al (2025)

Breaking barriers: broadening neuroscience education via cloud platforms and course-based undergraduate research.

Frontiers in neuroinformatics, 19:1608900.

This study demonstrates the effectiveness of integrating cloud computing platforms with Course-based Undergraduate Research Experiences (CUREs) to broaden access to neuroscience education. Over four consecutive spring semesters (2021-2024), a total of 42 undergraduate students at Lawrence Technological University participated in computational neuroscience CUREs using brainlife.io, a cloud-computing platform. Students conducted anatomical and functional brain imaging analyses on openly available datasets, testing original hypotheses about brain structure variations. The program evolved from initial data processing to hypothesis-driven research exploring the influence of age, gender, and pathology on brain structures. By combining open science and big data within a user-friendly cloud environment, the CURE model provided hands-on, problem-based learning to students with limited prior knowledge. This approach addressed key limitations of traditional undergraduate research experiences, including scalability, early exposure, and inclusivity. Students consistently worked with MRI datasets, focusing on volumetric analysis of brain structures, and developed scientific communication skills by presenting findings at annual research days. The success of this program demonstrates its potential to democratize neuroscience education, enabling advanced research without extensive laboratory facilities or prior experience, and promoting original undergraduate research using real-world datasets.

RevDate: 2025-08-01

Saghafi S, Kiarashi Y, Rodriguez AD, et al (2025)

Indoor Localization Using Multi-Bluetooth Beacon Deployment in a Sparse Edge Computing Environment.

Digital twins and applications, 2(1):.

Bluetooth low energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimised method to enhance indoor localization accuracy by utilising multiple BLE beacons in a radio frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the worst-case localization error from 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pis, has been released under an open-source license, enabling broader application and further research.
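The core signal-to-position step can be illustrated with a short sketch that converts RSSI to distance via a log-distance path-loss model and trilaterates a 2-D position from three beacons; the calibration constants are environment-dependent assumptions, not values from this study.

    # Sketch: RSSI -> distance (log-distance path-loss model) and least-squares
    # trilateration from three beacons. TX_POWER and PATH_LOSS_EXPONENT are
    # assumed calibration values that vary by environment.
    import numpy as np

    TX_POWER = -59.0          # RSSI at 1 m (dBm), assumed calibration value
    PATH_LOSS_EXPONENT = 2.2  # assumed indoor propagation exponent

    def rssi_to_distance(rssi_dbm):
        return 10 ** ((TX_POWER - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

    def trilaterate(beacons, rssi_values):
        # beacons: [(x, y), ...]; linearise the circle equations and solve Ax = b.
        d = [rssi_to_distance(r) for r in rssi_values]
        (x0, y0), d0 = beacons[0], d[0]
        A, b = [], []
        for (xi, yi), di in zip(beacons[1:], d[1:]):
            A.append([2 * (xi - x0), 2 * (yi - y0)])
            b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
        sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return sol  # estimated (x, y)

    print(trilaterate([(0, 0), (5, 0), (0, 5)], [-65, -72, -70]))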

RevDate: 2025-08-02
CmpDate: 2025-07-30

Kim MG, Kil BH, Ryu MH, et al (2025)

IoMT Architecture for Fully Automated Point-of-Care Molecular Diagnostic Device.

Sensors (Basel, Switzerland), 25(14):.

The Internet of Medical Things (IoMT) is revolutionizing healthcare by integrating smart diagnostic devices with cloud computing and real-time data analytics. The emergence of infectious diseases, including COVID-19, underscores the need for rapid and decentralized diagnostics to facilitate early intervention. Traditional centralized laboratory testing introduces delays, limiting timely medical responses. While point-of-care molecular diagnostic (POC-MD) systems offer an alternative, challenges remain in cost, accessibility, and network inefficiencies. This study proposes an IoMT-based architecture for fully automated POC-MD devices, leveraging WebSockets for optimized communication, enhancing microfluidic cartridge efficiency, and integrating a hardware-based emulator for real-time validation. The system incorporates DNA extraction and real-time polymerase chain reaction functionalities into modular, networked components, improving flexibility and scalability. Although the system itself has not yet undergone clinical validation, it builds upon the core cartridge and detection architecture of a previously validated cartridge-based platform for Chlamydia trachomatis and Neisseria gonorrhoeae (CT/NG). These pathogens were selected due to their global prevalence, high asymptomatic transmission rates, and clinical importance in reproductive health. In a previous clinical study involving 510 patient specimens, the system demonstrated high concordance with a commercial assay with limits of detection below 10 copies/μL, supporting the feasibility of this architecture for point-of-care molecular diagnostics. By addressing existing limitations, this system establishes a new standard for next-generation diagnostics, ensuring rapid, reliable, and accessible disease detection.

RevDate: 2025-08-02

Dong J, Tian M, Yu J, et al (2025)

DFPS: An Efficient Downsampling Algorithm Designed for the Global Feature Preservation of Large-Scale Point Cloud Data.

Sensors (Basel, Switzerland), 25(14):.

This paper introduces an efficient 3D point cloud downsampling algorithm (DFPS) based on adaptive multi-level grid partitioning. By leveraging an adaptive hierarchical grid partitioning mechanism, the algorithm dynamically adjusts computational intensity in accordance with terrain complexity. This approach effectively balances the global feature retention of point cloud data with computational efficiency, making it highly adaptable to the growing trend of large-scale 3D point cloud datasets. DFPS is designed with a multithreaded parallel acceleration architecture, which significantly enhances processing speed. Experimental results demonstrate that, for a point cloud dataset containing millions of points, DFPS reduces processing time from approximately 161,665 s using the original FPS method to approximately 71.64 s at a 12.5% sampling rate, achieving an efficiency improvement of over 2200 times. As the sampling rate decreases, the performance advantage becomes more pronounced: at a 3.125% sampling rate, the efficiency improves by nearly 10,000 times. By employing visual observation and quantitative analysis (with the chamfer distance as the measurement index), it is evident that DFPS can effectively preserve global feature information. Notably, DFPS does not depend on GPU-based heterogeneous computing, enabling seamless deployment in resource-constrained environments such as airborne and mobile devices, which makes DFPS an effective and lightweight tool for providing high-quality input data for subsequent algorithms, including point cloud registration and semantic segmentation.
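To illustrate the grid-partitioning idea (though not DFPS itself, which adds adaptive multi-level grids and multithreading on top), a basic single-level voxel-grid downsample can be written with NumPy as follows.

    # Sketch: single-level voxel-grid downsampling that keeps the centroid of
    # each occupied cell. DFPS extends this idea with adaptive multi-level grids.
    import numpy as np

    def voxel_downsample(points, cell_size):
        # points: (N, 3) array; cell_size: edge length of each grid cell.
        cells = np.floor(points / cell_size).astype(np.int64)
        _, inverse, counts = np.unique(cells, axis=0, return_inverse=True,
                                       return_counts=True)
        inverse = inverse.reshape(-1)
        sums = np.zeros((counts.size, 3))
        np.add.at(sums, inverse, points)       # sum the points falling in each cell
        return sums / counts[:, None]          # centroid per occupied cell

    pts = np.random.rand(1_000_000, 3) * 100.0  # synthetic one-million-point cloud
    print(voxel_downsample(pts, cell_size=2.0).shape)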

RevDate: 2025-08-01
CmpDate: 2025-07-30

Demieville J, Dilkes B, Eveland AL, et al (2025)

High-resolution phenomics dataset collected on a field-grown, EMS-mutagenized sorghum population evaluated in hot, arid conditions.

BMC research notes, 18(1):332.

OBJECTIVES: The University of Arizona Field Scanner (FS) is capable of generating massive amounts of data from a variety of instruments at high spatial and temporal resolution. The accompanying field infrastructure beneath the system offers capacity for controlled irrigation regimes in a hot, arid environment. Approximately 194 terabytes of raw and processed phenotypic image data were generated over two growing seasons (2020 and 2022) on a population of 434 sequence-indexed, EMS-mutagenized sorghum lines in the genetic background BTx623; the population was grown under well-watered and water-limited conditions. Collectively, these data enable links between genotype and dynamic, drought-responsive phenotypes, which can accelerate crop improvement efforts. However, analysis of these data can be challenging for researchers without background knowledge of the system and preliminary processing.

DATA DESCRIPTION: This dataset contains formatted tabular data generated from sensing system outputs suitable for a wide range of end-users and includes plant-level bounding areas, temperatures, and point cloud characteristics, as well as plot-level photosynthetic parameters and accompanying weather data. The dataset includes approximately 422 megabytes of tabular data totaling 1,903,412 unique unfiltered rows of FS data, 526,917 cleaned rows of FS data, and 285 rows of weather data from the two field seasons.

RevDate: 2025-07-31

Kaneko R, Akaishi S, Ogawa R, et al (2025)

Machine Learning-based Complementary Artificial Intelligence Model for Dermoscopic Diagnosis of Pigmented Skin Lesions in Resource-limited Settings.

Plastic and reconstructive surgery. Global open, 13(7):e7004.

BACKGROUND: Rapid advancements in big data and machine learning have expanded their application in healthcare, introducing sophisticated diagnostics to settings with limited medical resources. Notably, free artificial intelligence (AI) services that require no programming skills are now accessible to healthcare professionals, allowing those in underresourced areas to leverage AI technology. This study aimed to evaluate the potential of these accessible services for diagnosing pigmented skin tumors, underscoring the democratization of advanced medical technologies.

METHODS: In this experimental diagnostic study, we collected 400 dermoscopic images (100 per tumor type) labeled through supervised learning from pathologically confirmed cases. The images were split into training, validation, and testing datasets (8:1:1 ratio) and uploaded to Vertex AI for model training. Supervised learning was performed using the Google Cloud Platform, Vertex AI, based on pathological diagnoses. The model's performance was assessed using confusion matrices and precision-recall curves.

RESULTS: The AI model achieved an average recall rate of 86.3%, precision rate of 87.3%, accuracy of 86.3%, and F1 score of 0.87. Misclassification rates were less than 20% for each category. Accuracy was 80% for malignant melanoma and 100% for both basal cell carcinoma and seborrheic keratosis. Testing on separate cases yielded an accuracy of approximately 70%.

CONCLUSIONS: The metrics obtained in this study suggest that the model can reliably assist in the diagnostic process, even for practitioners without prior AI expertise. The study demonstrated that free AI tools can accurately classify pigmented skin lesions with minimal expertise, potentially providing high-precision diagnostic support in settings lacking dermatologists.

RevDate: 2025-07-31

Zhao M, H Chen (2025)

Identity-Based Provable Data Possession with Designated Verifier from Lattices for Cloud Computing.

Entropy (Basel, Switzerland), 27(7):.

Provable data possession (PDP) is a technique that enables the verification of data integrity in cloud storage without the need to download the data. PDP schemes are generally categorized into public and private verification. Public verification allows third parties to assess the integrity of outsourced data, offering good openness and flexibility, but it may lead to privacy leakage and security risks. In contrast, private verification restricts the auditing capability to the data owner, providing better privacy protection but often resulting in higher verification costs and operational complexity due to limited local resources. Moreover, most existing PDP schemes are based on classical number-theoretic assumptions, making them vulnerable to quantum attacks. To address these challenges, this paper proposes an identity-based PDP with a designated verifier over lattices, utilizing a specially leveled identity-based fully homomorphic signature (IB-FHS) scheme. We provide a formal security proof of the proposed scheme under the small-integer solution (SIS) and learning with errors (LWE) within the random oracle model. Theoretical analysis confirms that the scheme achieves security guarantees while maintaining practical feasibility. Furthermore, simulation-based experiments show that for a 1 MB file and lattice dimension of n = 128, the computation times for core algorithms such as TagGen, GenProof, and CheckProof are approximately 20.76 s, 13.75 s, and 3.33 s, respectively. Compared to existing lattice-based PDP schemes, the proposed scheme introduces additional overhead due to the designated verifier mechanism; however, it achieves a well-balanced optimization among functionality, security, and efficiency.

RevDate: 2025-07-31

Robertson R, Doucet E, Spicer E, et al (2025)

Simon's Algorithm in the NISQ Cloud.

Entropy (Basel, Switzerland), 27(7):.

Simon's algorithm was one of the first to demonstrate a genuine quantum advantage in solving a problem. The algorithm, however, assumes access to fault-tolerant qubits. In our work, we use Simon's algorithm to benchmark the error rates of devices currently available in the "quantum cloud". As a main result, we objectively compare the different physical platforms made available by IBM and IonQ. Our study highlights the importance of understanding the device architectures and topologies when transpiling quantum algorithms onto hardware. For instance, we demonstrate that two-qubit operations on spatially separated qubits on superconducting chips should be avoided.
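A minimal version of the circuit can be sketched with Qiskit and run on a local simulator before being submitted to cloud hardware; the secret string is fixed for the demonstration oracle, and availability of the qiskit and qiskit-aer packages is assumed.

    # Sketch: Simon's algorithm for a fixed secret string s, built with Qiskit
    # and run on the local Aer simulator (cloud back-ends would be swapped in here).
    from qiskit import QuantumCircuit, transpile
    from qiskit_aer import AerSimulator

    s = "110"                      # secret string, known here only to build the demo oracle
    n = len(s)
    qc = QuantumCircuit(2 * n, n)

    qc.h(range(n))                 # superposition over inputs
    # Oracle: copy x into the ancilla register, then fold in the secret string.
    for i in range(n):
        qc.cx(i, n + i)
    pivot = s[::-1].index("1")     # least-significant set bit of s
    for j, bit in enumerate(reversed(s)):
        if bit == "1":
            qc.cx(pivot, n + j)
    qc.h(range(n))
    qc.measure(range(n), range(n))

    sim = AerSimulator()
    counts = sim.run(transpile(qc, sim), shots=1024).result().get_counts()
    # Every observed bitstring z satisfies z . s = 0 (mod 2).
    print(counts)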

RevDate: 2025-08-01

Xue K, Jin X, Y Li (2025)

Exploring the Influence of Human-Computer Interaction Experience on Tourist Loyalty in the Context of Smart Tourism: A Case Study of Suzhou Museum.

Behavioral sciences (Basel, Switzerland), 15(7):.

As digital technology evolves rapidly, smart tourism has become a significant trend in the modernization of the industry, relying on advanced tools like big data and cloud computing to improve travelers' experiences. Despite the growing use of human-computer interaction in museums, there remains a lack of in-depth academic investigation into its impact on visitors' behavioral intentions regarding museum engagement. This paper employs Cognitive Appraisal Theory, considers human-computer interaction experience as the independent variable, and introduces destination image and satisfaction as mediators to examine their impact on destination loyalty. Based on a survey of 537 participants, the research shows that human-computer interaction experience has a significant positive impact on destination image, satisfaction, and loyalty. Destination image and satisfaction play a partial and sequential mediating role in this relationship. This paper explores the influence mechanism of human-computer interaction experience on destination loyalty and proposes practical interactive solutions for museums, aiming to offer insights for smart tourism research and practice.

RevDate: 2025-07-31

He J, Ye Q, Yang Z, et al (2025)

A compact public key encryption with equality test for lattice in cloud computing.

Scientific reports, 15(1):27426 pii:10.1038/s41598-025-12018-2.

The rapid proliferation of cloud computing enables users to access computing resources and storage space over the internet, but it also presents challenges in terms of security and privacy. Ensuring the security and availability of data has become a focal point of current research when utilizing cloud computing for resource sharing, data storage, and querying. Public key encryption with equality test (PKEET) can perform an equality test on ciphertexts without decrypting them, even when those ciphertexts are encrypted under different public keys, which offers a practical approach to dividing up or searching encrypted information directly. To deal with the threat raised by the rapid development of quantum computing, researchers have proposed post-quantum cryptography to guarantee the security of cloud services. However, it is challenging to implement these techniques efficiently. In this paper, a compact PKEET scheme is proposed. The new scheme does not encrypt the plaintext's hash value immediately but embeds it into the test trapdoor. We also demonstrate that our new construction is one-way secure under the quantum security model. With those efforts, our scheme can withstand chosen ciphertext attacks as long as the learning with errors (LWE) assumption holds. Furthermore, we evaluated the new scheme's performance and found that it only costs approximately half the storage space compared with previous schemes, and computing costs in the encryption and decryption stages are reduced by almost half. In a nutshell, the new PKEET scheme is less costly, more compact, and applicable to cloud computing scenarios in a post-quantum environment.

RevDate: 2025-07-31

Chen R, Lin M, Chen J, et al (2025)

Reproducibility assessment of magnetic resonance spectroscopy of pregenual anterior cingulate cortex across sessions and vendors via the cloud computing platform CloudBrain-MRS.

NeuroImage, 318:121400 pii:S1053-8119(25)00403-3 [Epub ahead of print].

Proton magnetic resonance spectroscopy (¹H-MRS) has potential in clinical diagnosis and in understanding the mechanisms of illnesses. However, its application is limited by the lack of standardization in data acquisition and processing across time points and between different magnetic resonance imaging (MRI) system vendors. This study examines whether metabolite concentrations obtained from different sessions, scanner models, and vendors can be reliably reproduced and combined for diagnostic analysis-an important consideration for rare disease research. Participants underwent magnetic resonance scanning once on two separate days within one week (one session per day, each including two ¹H-MRS scans without subject movement) on each machine. Absolute metabolite concentrations were analyzed for within- and between-session reliability using the coefficient of variation (CV), intraclass correlation coefficient (ICC), and Bland-Altman (BA) plots, and for reproducibility across machines using the Pearson correlation coefficient. For within- and between-session analyses, most of the CV values for a group of all the first or second scans of a session, and from each session, were below 20%, and most ICCs ranged from moderate (0.4 ≤ ICC < 0.59) to excellent (ICC ≥ 0.75), indicating high reliability. Most of the BA plots had the line of equality within the 95% confidence interval of the bias (mean difference), so the differences over scanning time could be considered negligible. The majority of the Pearson correlation coefficients approached 1 with statistical significance (P < 0.001), showing high reproducibility across the three scanners. Additionally, intra-vendor reproducibility was greater than inter-vendor reproducibility.

RevDate: 2025-07-31

Christy C, Nirmala A, Teena AMO, et al (2025)

Machine learning based multi-stage intrusion detection system and feature selection ensemble security in cloud assisted vehicular ad hoc networks.

Scientific reports, 15(1):27058.

The development of intelligent transportation systems relies heavily on Cloud-assisted Vehicular Ad Hoc Networks (VANETs); hence, these networks must be protected. VANETs are particularly susceptible to a broad range of attacks because of their extreme dynamism and decentralization. If these security threats materialize, the safety and efficiency of connected vehicles could be compromised, leading to disastrous road accidents. Solving these issues requires an advanced Intrusion Detection System (IDS) with real-time threat recognition and neutralization capabilities. A new method for improving VANET security, a Multi-stage Lightweight Intrusion Detection System Using Random Forest Algorithms (MLIDS-RFA), focuses on feature selection and ensemble models based on machine learning (ML). The proposed system employs a multi-step approach, with each stage dedicated to accurately detecting specific types of attacks. For feature selection, MLIDS-RFA uses machine-learning approaches to enhance the detection process, reducing processing overhead and shortening response times. The detection abilities of the ensemble models are enhanced by integrating the strengths of the Random Forest algorithm (RFA), which safeguards against intricate threats. The practicality of the proposed technology is demonstrated through thorough simulation analyses. This research demonstrates that the system can reduce false positives while maintaining high detection rates, ensuring the secure and reliable functioning of next-generation transport networks and preparing the path for VANET protection upgrades. MLIDS-RFA has improved detection accuracy (96.2%) and computing efficiency (94.8%) for dynamic VANET management. It operates well with large networks (97.8%) and adapts well to network changes (93.8%). The comprehensive methodology ensures high detection performance (95.9%) and VANET security by balancing accuracy, efficiency, and scalability.
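A generic single-stage baseline of this kind (feature selection followed by a Random Forest classifier) can be sketched with scikit-learn as below; the dataset path and column names are hypothetical, and the paper's MLIDS-RFA is a multi-stage ensemble rather than this simple pipeline.

    # Sketch: feature selection + Random Forest intrusion classifier with
    # scikit-learn. File path and column names are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline

    df = pd.read_csv("vanet_traffic_features.csv")    # hypothetical labelled dataset
    X, y = df.drop(columns=["label"]), df["label"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)

    ids = Pipeline([
        ("select", SelectKBest(mutual_info_classif, k=20)),  # keep 20 most informative features
        ("forest", RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)),
    ])
    ids.fit(X_tr, y_tr)
    print(classification_report(y_te, ids.predict(X_te)))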

RevDate: 2025-07-31

Punitha S, KS Preetha (2025)

Enhancing reliability and security in cloud-based telesurgery systems leveraging swarm-evoked distributed federated learning framework to mitigate multiple attacks.

Scientific reports, 15(1):27226.

Advances in robotic surgery are being driven by the convergence of technologies such as artificial intelligence (AI), 5G/6G wireless communication, the Internet of Things (IoT), and edge computing, enhancing clinical precision, speed, and real-time decision-making. However, the practical deployment of telesurgery and tele-mentoring remains constrained due to increasing cybersecurity threats, posing significant challenges to patient safety and system reliability. To address these issues, a distributed framework based on federated learning is proposed, integrating Optimized Gated Transformer Networks (OGTN) with layered chaotic encryption schemes to mitigate multiple unknown cyberattacks while preserving data privacy and integrity. The framework was implemented using TensorFlow Federated Learning Libraries (FLL) and evaluated on the UNSW-NB15 dataset. Performance was assessed using metrics including precision, accuracy, F1-score, recall, and security strength, and compared with existing approaches. In addition, structured and unstructured security assessments, including evaluations based on National Institute of Standards and Technology (NIST) recommendations, were performed to validate robustness. The proposed framework demonstrated superior performance in terms of diagnostic accuracy and cybersecurity resilience relative to conventional models. These results suggest that the framework is a viable candidate for integration into teleoperated healthcare systems, offering improved security and operational efficiency in robotic surgery applications.

RevDate: 2025-08-07

Baker J, Stricker E, Coleman J, et al (2025)

Implementing a training resource for large-scale genomic data analysis in the All of Us Researcher Workbench.

American journal of human genetics [Epub ahead of print].

A lack of representation in genomic research and limited access to computational training create barriers for many researchers seeking to analyze large-scale genetic datasets. The All of Us Research Program provides an unprecedented opportunity to address these gaps by offering genomic data from a broad range of participants, but its impact depends on equipping researchers with the necessary skills to use it effectively. The All of Us Biomedical Researcher (BR) Scholars Program at Baylor College of Medicine aims to break down these barriers by providing early-career researchers with hands-on training in computational genomics through the All of Us Evenings with Genetics Research Program. The year-long program begins with the faculty summit, an in-person computational boot camp that introduces scholars to foundational skills for using the All of Us dataset via a cloud-based research environment. The genomics tutorials focus on genome-wide association studies (GWASs), utilizing Jupyter Notebooks and the Hail computing framework to provide an accessible and scalable approach to large-scale data analysis. Scholars engage in hands-on exercises covering data preparation, quality control, association testing, and result interpretation. By the end of the summit, participants will have successfully conducted a GWAS, visualized key findings, and gained confidence in computational resource management. This initiative expands access to genomic research by equipping early-career researchers from a variety of backgrounds with the tools and knowledge to analyze All of Us data. By lowering barriers to entry and promoting the study of representative populations, the program fosters innovation in precision medicine and advances equity in genomic research.
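The core of a Hail-based GWAS of the kind the tutorials walk through can be condensed roughly as follows; the MatrixTable path, phenotype fields, and QC thresholds are illustrative assumptions, not the program's actual materials.

    # Sketch: the core of a Hail-based GWAS (paths, phenotype names, and
    # thresholds are illustrative assumptions).
    import hail as hl

    hl.init()
    mt = hl.read_matrix_table("gs://my-bucket/cohort.mt")   # hypothetical MatrixTable

    # Basic variant-level quality control.
    mt = hl.variant_qc(mt)
    mt = mt.filter_rows((mt.variant_qc.AF[1] > 0.01) & (mt.variant_qc.call_rate > 0.95))

    # Linear regression of a continuous phenotype on allele dosage plus covariates.
    gwas = hl.linear_regression_rows(
        y=mt.pheno.systolic_bp,
        x=mt.GT.n_alt_alleles(),
        covariates=[1.0, mt.pheno.age, mt.pheno.is_female],
    )
    gwas.order_by(gwas.p_value).show(10)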

RevDate: 2025-07-24

Dal I, HB Kaya (2025)

Multidisciplinary Evaluation of an AI-Based Pneumothorax Detection Model: Clinical Comparison with Physicians in Edge and Cloud Environments.

Journal of multidisciplinary healthcare, 18:4099-4111.

BACKGROUND: Accurate and timely detection of pneumothorax on chest radiographs is critical in emergency and critical care settings. While subtle cases remain challenging for clinicians, artificial intelligence (AI) offers promise as a diagnostic aid. This retrospective diagnostic accuracy study evaluates a deep learning model developed using Google Cloud Vertex AI for pneumothorax detection on chest X-rays.

METHODS: A total of 152 anonymized frontal chest radiographs (76 pneumothorax, 76 normal), confirmed by computed tomography (CT), were collected from a single center between 2023 and 2024. The median patient age was 50 years (range: 18-95), with 67.1% male. The AI model was trained using AutoML Vision and evaluated in both cloud and edge deployment environments. Diagnostic accuracy metrics-including sensitivity, specificity, and F1 score-were compared with those of 15 physicians from four specialties (general practice, emergency medicine, thoracic surgery, radiology), stratified by experience level. Subgroup analysis focused on minimal pneumothorax cases. Confidence intervals were calculated using the Wilson method.

RESULTS: In cloud deployment, the AI model achieved an overall diagnostic accuracy of 0.95 (95% CI: 0.83, 0.99), sensitivity of 1.00 (95% CI: 0.83, 1.00), specificity of 0.89 (95% CI: 0.69, 0.97), and F1 score of 0.95 (95% CI: 0.86, 1.00). Comparable performance was observed in edge mode. The model outperformed junior clinicians and matched or exceeded senior physicians, particularly in detecting minimal pneumothoraces, where AI sensitivity reached 0.93 (95% CI: 0.79, 0.97) compared to 0.55 (95% CI: 0.38, 0.69) - 0.84 (95% CI: 0.69, 0.92) among human readers.

CONCLUSION: The Google Cloud Vertex AI model demonstrates high diagnostic performance for pneumothorax detection, including subtle cases. Its consistent accuracy across edge and cloud settings supports its integration as a second reader or triage tool in diverse clinical workflows, especially in acute care or resource-limited environments.

RevDate: 2025-07-21

Onur D, Ç Özbakır (2025)

Pediatrics 4.0: the Transformative Impacts of the Latest Industrial Revolution on Pediatrics.

Health care analysis : HCA : journal of health philosophy and policy [Epub ahead of print].

Industry 4.0 represents the latest phase of industrial evolution, characterized by the seamless integration of cyber-physical systems, the Internet of Things, big data analytics, artificial intelligence, advanced robotics, and cloud computing, enabling smart, adaptive, and interconnected processes where physical, digital, and biological realms converge. In parallel, healthcare has progressed from the traditional, physician-centered model of Healthcare 1.0 by introducing medical devices and digitized records to Healthcare 4.0, which leverages Industry 4.0 technologies to create personalized, data-driven, and patient-centric systems. In this context, we hereby introduce Pediatrics 4.0 as a new paradigm that adapts these innovations to children's unique developmental, physiological, and ethical considerations and aims to improve diagnostic precision, treatment personalization, and continuous monitoring in pediatric populations. Key applications include AI-driven diagnostic and predictive analytics, IoT-enabled remote monitoring, big data-powered epidemiological insights, robotic assistance in surgery and rehabilitation, and 3D printing for patient-specific devices and pharmaceuticals. However, realizing Pediatrics 4.0 requires addressing significant challenges-data privacy and security, algorithmic bias, interoperability and standardization, equitable access, regulatory alignment, the ethical complexities of consent, and long-term technology exposure. Future research should focus on explainable AI, pediatric-specific device design, robust data governance frameworks, dynamic ethical and legal guidelines, interdisciplinary collaboration, and workforce training to ensure these transformative technologies translate into safer, more effective, and more equitable child healthcare.

RevDate: 2025-07-21

Parashar B, Malviya R, Sridhar SB, et al (2025)

IoT-enabled medical advances shaping the future of orthopaedic surgery and rehabilitation.

Journal of clinical orthopaedics and trauma, 68:103113.

The Internet of Things (IoT) connects smart devices to enable automation and data exchange. IoT is rapidly transforming the healthcare industry. Understanding of the framework and challenges of IoT is essential for effective implementation. This review explores the advances in IoT technology in orthopaedic surgery and rehabilitation. A comprehensive literature search was conducted by the author using databases such as PubMed, Scopus, and Google Scholar. Relevant peer-reviewed articles published between 2010 and 2024 were preferred based on their focus on IoT applications in orthopaedic surgery, rehabilitation, and assistive technologies. Keywords including "Internet of Things," "orthopaedic rehabilitation," "wearable sensors," and "smart health monitoring" were used. Studies were analysed to identify current trends, clinical relevance, and future opportunities in IoT-driven orthopaedic care. The reviewed studies demonstrate that IoT technologies, such as wearable motion sensors, smart implants, real-time rehabilitation platforms, and AI-powered analytics, have significantly improved orthopaedic surgical outcomes and patient recovery. These systems enable continuous monitoring, early complication detection, and adaptive rehabilitation. However, challenges persist in data security, device interoperability, user compliance, and standardisation across platforms. IoT holds great promise in enhancing orthopaedic surgery and rehabilitation by enabling real-time monitoring and personalised care. Moving forward, clinical validation, user-friendly designs, and strong data security will be key to its successful integration in routine practice.

RevDate: 2025-07-21

Gomase VS, Ghatule AP, Sharma R, et al (2025)

Cloud Computing Facilitating Data Storage, Collaboration, and Analysis in Global Healthcare Clinical Trials.

Reviews on recent clinical trials pii:RRCT-EPUB-149483 [Epub ahead of print].

INTRODUCTION: Healthcare data management, especially in the context of clinical trials, has been completely transformed by cloud computing. It makes it easier to store data, collaborate in real time, and perform advanced analytics across international research networks by providing scalable, secure, and affordable solutions. This paper explores how cloud computing is revolutionizing clinical trials, tackling issues including data integration, accessibility, and regulatory compliance.

MATERIALS AND METHODS: Key factors assessed include cloud platform-enabled analytical tools, collaborative features, and data storage capacity. To ensure the safe management of sensitive healthcare data, adherence to laws like GDPR and HIPAA was emphasized.

RESULTS: Real-time updates and integration of multicenter trial data were made possible by cloud systems, which also showed notable gains in collaborative workflows and data sharing. Highly scalable storage options reduced infrastructure expenses while upholding security requirements. Rapid interpretation of complicated datasets was made possible by sophisticated analytical tools driven by machine learning and artificial intelligence, which expedited decision-making. Improved patient recruitment tactics and flexible trial designs are noteworthy examples.

CONCLUSION: Cloud computing has become essential for international clinical trials because it provides unmatched efficiency in data analysis, communication, and storage. It is a pillar of contemporary healthcare research due to its capacity to guarantee data security and regulatory compliance as well as its creative analytical capabilities. Subsequent research ought to concentrate on further refining cloud solutions to tackle new issues and utilizing their complete capabilities in clinical trial administration.

RevDate: 2025-07-23

Yang X, Yao K, Li S, et al (2025)

A smart grid data sharing scheme supporting policy update and traceability.

Scientific reports, 15(1):26343 pii:10.1038/s41598-025-10704-9.

To address the problems of centralized attribute authority, inefficient encryption and invalid access control strategy in the data sharing scheme based on attribute-based encryption technology, a smart grid data sharing scheme that supports policy update and traceability is proposed. The smart contract of the blockchain is used to generate the user's key, which does not require a centralized attribute authority. Combined with attribute-based encryption and symmetric encryption technology, the confidentiality of smart grid data is protected and flexible data access control is achieved. In addition, online/offline encryption and outsourced computing technologies complete most of the computing tasks in the offline stage or cloud server, which greatly reduces the computing burden of data owners and data access users. By introducing the access control policy update mechanism, the data owner can flexibly modify the key ciphertext stored in the cloud server. Finally, the analysis results show that this scheme can protect the privacy of smart grid data, verify the integrity of smart grid data, resist collusion attacks and track the identity of malicious users who leak private keys, and its efficiency is better than similar data sharing schemes.

RevDate: 2025-07-23

Yin X, Zhang X, Pei L, et al (2025)

Optimization and benefit evaluation model of a cloud computing-based platform for power enterprises.

Scientific reports, 15(1):26366.

To address the challenges associated with the digital transformation of the power industry, this research develops an optimization and benefit evaluation model for cloud computing platforms tailored to power enterprises. It responds to the current lack of systematic optimization mechanisms and evaluation methods in existing cloud computing applications. The proposed model focuses on resource scheduling optimization, task load balancing, and improvements in computational efficiency. A multidimensional optimization framework is constructed, integrating key parameters such as path planning, condition coefficient computation, and the regulation of task and average loads. The model employs an improved lightweight genetic algorithm combined with an elastic resource allocation strategy to dynamically adapt to task changes across various operational scenarios. Experimental results indicate a 46% reduction in failure recovery time, a 78% improvement in high-load throughput capacity, and an average increase of nearly 60% in resource utilization. Compared with traditional on-premise architectures and static scheduling models, the proposed approach offers notable advantages in computational response time and fault tolerance. In addition, through containerized deployment and intelligent orchestration, it achieves a 43% reduction in monthly operating costs. A multi-level benefit evaluation system-spanning power generation, grid operations, and end-user services-is established, integrating historical data, expert weighting, and dynamic optimization algorithms to enable quantitative performance assessment and decision support. In contrast to existing studies that mainly address isolated functional modules such as equipment health monitoring or collaborative design, this research presents a novel paradigm characterized by architectural integration, methodological versatility, and industrial applicability. It thus addresses the empirical gap in multi-objective optimization for industrial-scale power systems. The theoretical contribution of this research lies in the establishment of a highly scalable and integrated framework for optimization and evaluation. Its practical significance is reflected in the notable improvements in operational efficiency and cost control in real-world applications. The proposed model provides a clear trajectory and quantitative foundation for promoting an efficient and intelligent cloud computing ecosystem in the power sector.

RevDate: 2025-07-22

Cao J, Yu Z, Zhu B, et al (2025)

Construction and efficiency analysis of an embedded system-based verification platform for edge computing.

Scientific reports, 15(1):26114.

With the profound convergence and advancement of the Internet of Things, big data analytics, and artificial intelligence technologies, edge computing-a novel computing paradigm-has garnered significant attention. While edge computing simulation platforms offer convenience for simulations and tests, the disparity between them and real-world environments remains a notable concern. These platforms often struggle to precisely mimic the interactive behaviors and physical attributes of actual devices. Moreover, they face constraints in real-time responsiveness and scalability, thus limiting their ability to truly reflect practical application scenarios. To address these obstacles, our study introduces an innovative physical verification platform for edge computing, grounded in embedded devices. This platform seamlessly integrates KubeEdge and Serverless technological frameworks, facilitating dynamic resource allocation and efficient utilization. Additionally, by leveraging the robust infrastructure and cloud services provided by Alibaba Cloud, we have significantly bolstered the system's stability and scalability. To ensure a comprehensive assessment of our architecture's performance, we have established a realistic edge computing testing environment, utilizing embedded devices like Raspberry Pi. Through rigorous experimental validations involving offloading strategies, we have observed impressive outcomes. The refined offloading approach exhibits outstanding results in critical metrics, including latency, energy consumption, and load balancing. This not only underscores the soundness and reliability of our platform design but also illustrates its versatility for deployment in a broad spectrum of application contexts.

RevDate: 2025-07-20

C BS, St B, S S (2025)

Achieving cloud resource optimization with trust-based access control: A novel ML strategy for enhanced performance.

MethodsX, 15:103461.

Cloud computing continues to rise, increasing the demand for more intelligent, rapid, and secure resource management. This paper presents AdaPCA-a novel method that integrates the adaptive capabilities of AdaBoost with the dimensionality-reduction efficacy of PCA. What is the objective? Enhance trust-based access control and resource allocation decisions while maintaining a minimal computational burden. High-dimensional trust data frequently hampers systems; however, AdaPCA mitigates this issue by identifying essential aspects and enhancing learning efficacy concurrently. To evaluate its performance, we conducted a series of simulations comparing it with established methods such as Decision Trees, Random Forests, and Gradient Boosting. We assessed execution time, resource use, latency, and trust accuracy. Results show that AdaPCA achieved a trust score prediction accuracy of 99.8%, a resource utilization efficiency of 95%, and reduced allocation time to 140 ms, outperforming the benchmark models across all evaluated parameters. AdaPCA had superior performance overall-expedited decision-making, optimized resource utilization, reduced latency, and the highest accuracy in trust evaluation among the evaluated models. AdaPCA is not merely another model; it represents a significant advancement towards more intelligent and safe cloud systems designed for the future.
• Introduces AdaPCA, a novel hybrid approach that integrates AdaBoost with PCA to optimize cloud resource allocation and improve trust-based access control.
• Outperforms conventional techniques such as Decision Tree, Random Forest, and Gradient Boosting by attaining superior trust accuracy, expedited execution, enhanced resource utilization, and reduced latency.
• Presents an intelligent, scalable, and adaptable architecture for secure and efficient management of cloud resources, substantiated by extensive simulation experiments.
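The general pattern the method builds on (dimensionality reduction feeding a boosted ensemble) can be sketched with scikit-learn as below; the synthetic data stands in for trust features, and this is not the authors' AdaPCA implementation.

    # Sketch: PCA dimensionality reduction feeding an AdaBoost classifier in a
    # scikit-learn pipeline. The synthetic data is a placeholder for trust features.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 40))                       # synthetic high-dimensional trust features
    y = (X[:, :5].sum(axis=1) + rng.normal(size=2000) > 0).astype(int)

    model = Pipeline([
        ("pca", PCA(n_components=10)),                    # compress 40 features to 10 components
        ("ada", AdaBoostClassifier(n_estimators=100, random_state=0)),
    ])
    print(cross_val_score(model, X, y, cv=5).mean())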

RevDate: 2025-07-18
CmpDate: 2025-07-18

Zhao N, Wang B, Wang ZH, et al (2025)

[Spatiotemporal Evolution of Ecological Environment Quality and Ecological Management Zoning in Inner Mongolia Based on RSEI].

Huan jing ke xue= Huanjing kexue, 46(7):4499-4509.

Inner Mongolia serves as a crucial ecological security barrier for northern China. Examining the spatial and temporal evolution of ecological environment quality, along with zoning for ecological management, is crucial for enhancing the management and development of ecological environments. Based on the Google Earth Engine cloud platform, four indicators-heat, greenness, dryness, and wetness-were extracted from MODIS remote sensing image data spanning 2000 to 2023. The remote sensing ecological index (RSEI) model was constructed using principal component analysis. By combining the coefficient of variation (CV), Sen + Mann-Kendall, and Hurst indices, the spatial and temporal variations and future trends of the ecological environment quality of Inner Mongolia were analyzed. The influencing mechanisms were explored using a geographical detector, and the quadrant method was employed for ecological management zoning based on the intensity of human activities and the quality of the ecological environment. The results indicated that: ① The ecological environment quality of Inner Mongolia from 2000 to 2023 was mainly characterized as poor to average, with a spatial trend of decreasing quality from east to west. From 2000 to 2005, Inner Mongolia experienced environmental degradation, followed by a gradual improvement in ecological environment quality. ② Inner Mongolia exhibited the largest area of non-significantly improved and non-significantly degraded regions, and the overall environmental quality was relatively stable. However, ecosystems in the western region were more fragile and prone to fluctuations. The area of sustained degradation versus sustained improvement in the future trend of change was larger, and the western region is expected to be the main area of improvement in the future. ③ The results of single-factor detection showed that the influences on RSEI values were, in descending order, precipitation, soil type, land use type, air temperature, vegetation type, elevation, population density, GDP, and nighttime lighting; the interactions among driving factors on RSEI changes showed bivariate or nonlinear enhancement, which suggests that the interactions of the driving factors could improve the explanatory power of spatial variations in ecological environment quality. ④ Based on the coupling of human activity intensity and ecological environment quality, the 12 league cities of Inner Mongolia were divided into ecological development coordination zones, ecological development reserves, and ecological development risk zones. This study can provide a scientific basis for ecological environmental protection and sustainable development in Inner Mongolia.

RevDate: 2025-07-18

Yang M, Liu EQ, Yang Y, et al (2025)

[Quantitative Analysis of Wetland Evolution Characteristics and Driving Factors in Ruoergai Plateau Based on Landsat Time Series Remote Sensing Images].

Huan jing ke xue= Huanjing kexue, 46(7):4461-4472.

The Ruoergai Wetland, China's largest high-altitude marsh, plays a crucial role in the carbon cycle and climate regulation. However, it has experienced significant damage as a result of human activity and global warming. Based on the Google Earth Engine (GEE) cloud platform and time-series Landsat images, a random forest algorithm was applied to produce a detailed classification map of the Ruoergai wetlands from 1990 to 2020. Using the transfer matrix and landscape pattern indices, the spatiotemporal changes and trends of the wetlands were analyzed, and the factors influencing wetland distribution were then quantitatively analyzed using the geographical detector. The results showed that: ① The total wetland area averaged 3,910 km² from 1990 to 2020, dominated by marsh and wet meadows, which accounted for 83.13% of the total wetland area. From 1990 to 2010, the wetland area of Ruoergai showed a decreasing trend, and from 2010 to 2020, the wetland area increased slightly. ② From 1990 to 2020, the decrease in wetland area was mainly reflected in the degradation of wet meadows into alpine grassland. There were also changes among different wetland types, mainly reflected in the conversion between marsh meadows and wet meadows. ③ From 1990 to 2010, the wetland landscape tended to become fragmented and more complex, and the degree of aggregation decreased. From 2010 to 2020, wetland fragmentation decreased, and the wetland landscape became more concentrated. ④ Slope, temperature, and aspect were the main natural factors affecting wetland distribution, while population density has gradually become a significant socioeconomic factor affecting wetland distribution. The results can provide scientific support for wetland protection planning in Ruoergai and serve the ecological preservation and high-quality development of the area.
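A wetland-change transfer matrix of the kind used in the analysis can be computed as a cross-tabulation of two classified maps; the class list and the random per-pixel codes below are placeholders, not the study's data.

    # Sketch: land-cover transfer (transition) matrix between two classification rasters.
    import numpy as np
    import pandas as pd

    classes = ["marsh meadow", "wet meadow", "alpine grassland", "water"]  # illustrative classes
    rng = np.random.default_rng(2)
    map_1990 = rng.integers(0, len(classes), size=100000)   # per-pixel class codes, 1990
    map_2020 = rng.integers(0, len(classes), size=100000)   # per-pixel class codes, 2020

    transfer = pd.crosstab(
        pd.Series(map_1990, name="1990").map(dict(enumerate(classes))),
        pd.Series(map_2020, name="2020").map(dict(enumerate(classes))),
    )
    print(transfer)   # rows: class in 1990, columns: class in 2020, values: pixel counts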

RevDate: 2025-07-18
CmpDate: 2025-07-16

Narasimha Raju AS, Venkatesh K, Rajababu M, et al (2025)

Colorectal cancer unmasked: A synergistic AI framework for Hyper-granular image dissection, precision segmentation, and automated diagnosis.

BMC medical imaging, 25(1):283.

Colorectal cancer (CRC) is the second most common cause of cancer-related mortality worldwide, underscoring the necessity for computer-aided diagnosis (CADx) systems that are interpretable, accurate, and robust. This study presents a practical CADx system that combines Vision Transformers (ViTs) and DeepLabV3+ to accurately identify and segment colorectal lesions in colonoscopy images. The system addresses class imbalance and real-world complexity with PCA-based dimensionality reduction, data augmentation, and strategic preprocessing of the recently curated CKHK-22 dataset, comprising more than 14,000 annotated images from CVC-ClinicDB, Kvasir-2, and Hyper-Kvasir. ViT, ResNet-50, DenseNet-201, and VGG-16 were used to quantify classification performance. ViT achieved best-in-class accuracy (97%), F1-score (0.95), and AUC (92%) on the test data. DeepLabV3+ achieved state-of-the-art segmentation for localisation tasks, with a Dice coefficient of 0.88 and an Intersection over Union (IoU) of 0.71, ensuring sharp delineation of malignant areas. The CADx system supports real-time inference and is served through Google Cloud, enabling scalable clinical implementation. Segmentation effectiveness is evidenced by comparison of visual overlays with expert manually delineated masks, and performance is quantified by precision, recall, F1-score, and AUC. The hybrid strategy not only outperforms traditional CNN approaches but also addresses important clinical needs such as early detection, handling of highly imbalanced classes, and clear explanation. The proposed ViT-DeepLabV3+ system establishes a basis for advanced AI support in colorectal diagnosis by leveraging self-attention and multi-scale context learning. The system offers a high-capacity, reproducible computerised colorectal cancer screening and monitoring solution that can be deployed where resources are scarce and is well suited to clinical deployment.
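The segmentation scores reported above (Dice and IoU) have simple closed forms; a sketch with synthetic binary masks:

    # Sketch: Dice coefficient and Intersection over Union for binary segmentation masks.
    import numpy as np

    def dice_coefficient(pred, target, eps=1e-7):
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    def iou(pred, target, eps=1e-7):
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        union = np.logical_or(pred, target).sum()
        return (inter + eps) / (union + eps)

    rng = np.random.default_rng(3)
    pred_mask = rng.random((256, 256)) > 0.5   # placeholder predicted mask
    true_mask = rng.random((256, 256)) > 0.5   # placeholder expert mask
    print("Dice:", dice_coefficient(pred_mask, true_mask), "IoU:", iou(pred_mask, true_mask))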

RevDate: 2025-07-18
CmpDate: 2025-07-16

Islam U, Alatawi MN, Alqazzaz A, et al (2025)

A hybrid fog-edge computing architecture for real-time health monitoring in IoMT systems with optimized latency and threat resilience.

Scientific reports, 15(1):25655.

The advancement of the Internet of Medical Things (IoMT) has transformed healthcare delivery by enabling real-time health monitoring. However, it introduces critical challenges related to latency and, more importantly, the secure handling of sensitive patient data. Traditional cloud-based architectures often struggle with latency and data protection, making them inefficient for real-time healthcare scenarios. To address these challenges, we propose a Hybrid Fog-Edge Computing Architecture tailored for effective real-time health monitoring in IoMT systems. Fog computing enables processing of time-critical data closer to the data source, reducing response time and relieving cloud system overload. Simultaneously, edge computing nodes handle data preprocessing and transmit only valuable information, defined as abnormal or high-risk health signals such as irregular heart rate or oxygen levels, using rule-based filtering, statistical thresholds, and lightweight machine learning models like Decision Trees and One-Class SVMs. This selective transmission optimizes bandwidth without compromising response quality. The architecture integrates robust security measures, including end-to-end encryption and distributed authentication, to counter rising data breaches and unauthorized access in IoMT networks. Real-life case scenarios and simulations are used to validate the model, evaluating latency reduction, data consolidation, and scalability. Results demonstrate that the proposed architecture significantly outperforms cloud-only models, with a 70% latency reduction, 30% improvement in energy efficiency, and 60% bandwidth savings. Additionally, the time required for threat detection was halved, ensuring faster response to security incidents. This framework offers a flexible, secure, and efficient solution ideal for time-sensitive healthcare applications such as remote patient monitoring and emergency response systems.
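The edge-side selective transmission described above can be sketched as a rule check combined with a lightweight anomaly detector; the vital-sign thresholds, features, and training data below are assumptions, not the paper's configuration.

    # Sketch: forward a reading only if a rule fires or a One-Class SVM flags it as anomalous.
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(4)
    normal_vitals = np.column_stack([rng.normal(75, 8, 1000),    # heart rate (bpm), synthetic "normal" data
                                     rng.normal(97, 1, 1000)])   # SpO2 (%)
    detector = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(normal_vitals)

    def should_transmit(heart_rate, spo2):
        rule_abnormal = heart_rate < 50 or heart_rate > 120 or spo2 < 92   # rule-based thresholds (assumed)
        svm_abnormal = detector.predict([[heart_rate, spo2]])[0] == -1     # -1 = anomaly
        return rule_abnormal or svm_abnormal

    print(should_transmit(72, 97))    # likely False: keep the reading at the edge
    print(should_transmit(135, 88))   # True: forward to the fog/cloud tier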

RevDate: 2025-07-18

Khaldy MAA, Nabot A, Al-Qerem A, et al (2025)

Adaptive conflict resolution for IoT transactions: A reinforcement learning-based hybrid validation protocol.

Scientific reports, 15(1):25589.

This paper introduces a novel Reinforcement Learning-Based Hybrid Validation Protocol (RL-CC) that revolutionizes conflict resolution for time-sensitive IoT transactions through adaptive edge-cloud coordination. Efficient transaction management in sensor-based systems is crucial for maintaining data integrity and ensuring timely execution within the constraints of temporal validity. Our key innovation lies in dynamically learning optimal scheduling policies that minimize transaction aborts while maximizing throughput under varying workload conditions. The protocol consists of two validation phases: an edge validation phase, where transactions undergo preliminary conflict detection and prioritization based on their temporal constraints, and a cloud validation phase, where a final conflict resolution mechanism ensures transactional correctness on a global scale. The RL-based mechanism continuously adapts decision-making by learning from system states, prioritizing transactions, and dynamically resolving conflicts using a reward function that accounts for key performance parameters, including the number of conflicting transactions, cost of aborting transactions, temporal validity constraints, and system resource utilization. Experimental results demonstrate that our RL-CC protocol achieves a 90% reduction in transaction abort rates (5% vs. 45% for 2PL), 3x higher throughput (300 TPS vs. 100 TPS), and 70% lower latency compared to traditional concurrency control methods. The proposed RL-CC protocol significantly reduces transaction abort rates, enhances concurrency management, and improves the efficiency of sensor data processing by ensuring that transactions are executed within their temporal validity window. The results suggest that the RL-based approach offers a scalable and adaptive solution for sensor-based applications requiring high-concurrency transaction processing, such as Internet of Things (IoT) networks, real-time monitoring systems, and cyber-physical infrastructures.
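The abstract describes the reward only qualitatively; one hedged way to combine the stated terms (conflicts, abort cost, temporal validity, utilization) into a scalar reward is sketched below, with all weights and the functional form being assumptions.

    # Sketch: a possible scheduling reward over the factors named in the abstract.
    def scheduling_reward(n_conflicts, abort_cost, deadline_slack_s, utilization,
                          w_conflict=1.0, w_abort=0.5, w_deadline=2.0, w_util=1.0):
        deadline_penalty = w_deadline if deadline_slack_s < 0 else 0.0   # missed temporal-validity window
        return (w_util * utilization
                - w_conflict * n_conflicts
                - w_abort * abort_cost
                - deadline_penalty)

    # Example: 2 conflicting transactions, abort cost 3, 0.5 s of slack left, 80% resource utilization
    print(scheduling_reward(n_conflicts=2, abort_cost=3, deadline_slack_s=0.5, utilization=0.8))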

RevDate: 2025-07-17

Contaldo SG, d'Acierno A, Bosio L, et al (2025)

Long-read microbial genome assembly, gene prediction and functional annotation: a service of the MIRRI ERIC Italian node.

Frontiers in bioinformatics, 5:1632189.

BACKGROUND: Understanding the structure and function of microbial genomes is crucial for uncovering their ecological roles, evolutionary trajectories, and potential applications in health, biotechnology, agriculture, food production, and environmental science. However, genome reconstruction and annotation remain computationally demanding and technically complex.

RESULTS: We introduce a bioinformatics platform designed explicitly for long-read microbial sequencing data to address these challenges. Developed as a service of the Italian MIRRI ERIC node, the platform provides a comprehensive solution for analyzing both prokaryotic and eukaryotic genomes, from assembly to functional protein annotation. It integrates state-of-the-art tools (e.g., Canu, Flye, BRAKER3, Prokka, InterProScan) within a reproducible, scalable workflow built on the Common Workflow Language and accelerated through high-performance computing infrastructure. A user-friendly web interface ensures accessibility, even for non-specialists.

CONCLUSION: Through case studies involving three environmentally and clinically significant microorganisms, we demonstrate the ability of the platform to produce reliable, biologically meaningful insights, positioning it as a valuable tool for routine genome analysis and advanced microbial research.

RevDate: 2025-07-17

Georgiou D, Katsaounis S, Tsanakas P, et al (2025)

Towards a secure cloud repository architecture for the continuous monitoring of patients with mental disorders.

Frontiers in digital health, 7:1567702.

INTRODUCTION: Advances in Information Technology are transforming healthcare systems, with a focus on improving accessibility, efficiency, resilience, and service quality. Wearable devices such as smartwatches and mental health trackers enable continuous biometric data collection, offering significant potential to enhance chronic disorder treatment and overall healthcare quality. However, these technologies introduce critical security and privacy risks, as they handle sensitive patient data.

METHODS: To address these challenges, this paper proposes a security-by-design cloud-based architecture that leverages wearable body sensors for continuous patient monitoring and mental disorder prediction. The system integrates an Elasticsearch-powered backend to manage biometric data securely. A dedicated framework was developed to ensure confidentiality, integrity, and availability (CIA) of patient data through secure communication protocols and privacy-preserving mechanisms.

RESULTS: The proposed architecture successfully enables secure real-time biometric monitoring and data processing from wearable devices. The system is designed to operate 24/7, ensuring robust performance in continuously tracking both mental and physiological health indicators. The inclusion of Elasticsearch provides scalable and efficient data indexing and retrieval, supporting timely healthcare decisions.

DISCUSSION: This work addresses key security and privacy challenges inherent in continuous biometric data collection. By incorporating a security-by-design approach, the proposed framework enhances trustworthiness in healthcare monitoring technologies. The solution demonstrates the feasibility of balancing real-time health monitoring needs with stringent data protection requirements.

RevDate: 2025-07-16

Owuor CD, Tesfaye B, Wakem AYD, et al (2025)

Visualization of the Evolution and Transmission of Circulating Vaccine-Derived Poliovirus (cVDPV) Outbreaks in the African Region.

Bio-protocol, 15(13):e5376.

Since the creation of the Global Polio Eradication Initiative (GPEI) in 1988, significant progress has been made toward attaining a poliovirus-free world. This has resulted in the eradication of wild poliovirus (WPV) serotypes two (WPV2) and three (WPV3) and limited transmission of serotype one (WPV1) in Pakistan and Afghanistan. However, the increased emergence of circulating vaccine-derived poliovirus (cVDPV) and the continued circulation of WPV1, although limited to two countries, pose a continuous threat of international spread of poliovirus. These challenges highlight the need to further strengthen surveillance and outbreak responses, particularly in the African Region (AFRO). Phylogeographic visualization tools may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective supplementary immunization activities and improved outbreak response and surveillance. We created a comprehensive protocol for the phylogeographic analysis of polioviruses using Nextstrain, a powerful open-source tool for real-time interactive visualization of virus sequencing data. It is expected that this protocol will support poliovirus elimination strategies in AFRO and contribute significantly to global eradication strategies. These tools have been utilized for other pathogens of public health importance, for example, SARS-CoV-2, human influenza, Ebola, and Mpox, among others, through real-time tracking of pathogen evolution (https://nextstrain.org), harnessing the scientific and public health potential of pathogen genome data. Key features
• Employs Nextstrain (https://nextstrain.org), which is an open-source tool for real-time interactive visualization of genome sequencing datasets.
• First comprehensive protocol for the phylogeographic analysis of poliovirus sequences collected from countries in the World Health Organization (WHO) African Region (AFRO).
• Phylogeographic visualization may provide insights into changes in poliovirus epidemiology, which can in turn guide the implementation of more strategic and effective vaccination campaigns.
• This protocol can be deployed locally on a personal computer or on a Microsoft Azure cloud server for high throughput.

RevDate: 2025-07-16

Shyam Sundar Bhuvaneswari VS, M Thangamuthu (2025)

Towards Intelligent Safety: A Systematic Review on Assault Detection and Technologies.

Sensors (Basel, Switzerland), 25(13):.

This literature review discusses the use of emerging technologies in the prevention of assault, specifically Artificial Intelligence (AI), the Internet of Things (IoT), and wearable technologies. To prevent assaults, GIS-based mobile apps, wearable safety devices, and other personal security solutions have been designed to improve personal safety, especially for women and other vulnerable groups. The paper also analyzes the interfacing networks, such as edge computing, cloud databases, and security frameworks, required for emergency response solutions. In addition, we introduce a framework that brings these technologies together to deliver an effective response system. This review seeks to identify current gaps, ascertain major challenges, and suggest potential directions for enhancing personal security through technology.

RevDate: 2025-07-16

Roumeliotis AJ, Myritzis E, Kosmatos E, et al (2025)

Multi-Area, Multi-Service and Multi-Tier Edge-Cloud Continuum Planning.

Sensors (Basel, Switzerland), 25(13):.

This paper presents the optimal planning of multi-area, multi-service, and multi-tier edge-cloud environments. The goal is to evaluate the regional deployment of the compute continuum, i.e., the type and number of processing devices and their pairing with a specific tier and task across different areas, subject to processing, rate, and latency requirements. Different offline compute continuum planning approaches are investigated, and a detailed analysis of various design choices is presented. We study one scheme that considers all tasks at once and two others that use smaller task batches; both of the latter iterative schemes finish once all task groups have been traversed. The group-based approaches are introduced to deal with the potentially excessive execution times of real-world problem sizes. Solutions are provided for continuum planning using both direct, more complex methods and simpler, faster ones. Results show that processing all tasks simultaneously yields better performance but requires longer execution, while medium-sized batches achieve good performance faster. The batch-oriented schemes are therefore capable of handling larger problem sizes. Moreover, the task selection strategy in group-based schemes influences performance; a more detailed analysis is performed for this case, and different clustering methods are also considered. Based on our simulations, random selection of tasks in group-based approaches achieves better performance in most cases.

RevDate: 2025-07-13

Ahmmad J, El-Wahed Khalifa HA, Waqas HM, et al (2025)

Ranking data privacy techniques in cloud computing based on Tamir's complex fuzzy Schweizer-Sklar aggregation approach.

Scientific reports, 15(1):24943 pii:10.1038/s41598-025-09557-z.

In the era of cloud computing, securing data privacy has become an important challenge, as massive amounts of sensitive information are stored and processed in shared environments. Cloud platforms have become a necessary component for managing personal, commercial, and governmental data, so the demand for effective data privacy techniques within cloud security frameworks has increased. Data privacy is no longer just an exercise in compliance; it also reassures stakeholders and protects valuable information from cyber-attacks. The decision-making (DM) landscape for cloud providers is therefore extremely complex, because they need to select the optimal approach from a very wide gamut of privacy techniques, ranging from encryption to anonymization. A novel complex fuzzy Schweizer-Sklar aggregation approach can rank and prioritize data privacy techniques and is particularly suitable for cloud settings, as it can handle the uncertainties and multi-dimensional aspects of privacy evaluation. In this manuscript, we first introduce the fundamental Schweizer-Sklar operational laws for the cartesian form of the complex fuzzy framework. Relying on these operational laws, we then introduce the notions of cartesian-form complex fuzzy Schweizer-Sklar power average and complex fuzzy Schweizer-Sklar power geometric aggregation operators (AOs), and develop their main properties, such as idempotency, boundedness, and monotonicity. We also present an algorithm for applying the developed theory, and provide an illustrative example and case study showing the ranking of data privacy techniques in cloud computing. At the end of the manuscript, we present a comparative analysis to show the advantages of the introduced work.

RevDate: 2025-07-13

Adabi V, Etedali HR, Azizian A, et al (2025)

Aqua-MC as a simple open access code for uncountable runs of AquaCrop.

Scientific reports, 15(1):24975.

Understanding uncertainty in crop modeling is essential for improving prediction accuracy and decision-making in agricultural management. Monte Carlo simulations are widely used for uncertainty and sensitivity analysis, but their application to closed-source models like AquaCrop presents significant challenges due to the lack of direct access to source code. This study introduces Aqua-MC, an automated framework designed to facilitate Monte Carlo simulations in AquaCrop by integrating probabilistic parameter selection, iterative execution, and uncertainty quantification within a structured workflow. To demonstrate its effectiveness, Aqua-MC was applied to wheat yield modeling in Qazvin, Iran, where parameter uncertainty was assessed using 3000 Monte Carlo simulations. The DYNIA (Dynamic Identifiability Analysis) method was employed to evaluate the time-dependent sensitivity of 47 model parameters, providing insights into the temporal evolution of parameter influence. The results revealed that soil evaporation and yield predictions exhibited the highest uncertainty, while transpiration and biomass outputs were more stable. The study also highlighted that many parameters had low impact, suggesting that reducing the number of free parameters could enhance model efficiency. Despite its advantages, Aqua-MC has some limitations, including its computational intensity and reliance on the GLUE method, which may overestimate uncertainty bounds. To improve applicability, future research should focus on parallel computing, cloud-based execution, integration with machine learning techniques, and expanding Aqua-MC to multi-crop studies. By overcoming the limitations of closed-source models, Aqua-MC provides a scalable and efficient solution for performing large-scale uncertainty analysis in crop modeling.
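The Monte Carlo pattern that Aqua-MC automates can be sketched as follows; run_aquacrop() is a hypothetical placeholder for a call into the real, closed-source model, and the parameter names and ranges are illustrative.

    # Sketch: sample parameter sets, run the model per set, collect outputs for uncertainty analysis.
    import numpy as np

    rng = np.random.default_rng(5)
    param_ranges = {                                   # illustrative bounds, not AquaCrop's real parameters
        "canopy_growth_coeff": (0.005, 0.012),
        "harvest_index": (0.30, 0.55),
        "max_rooting_depth_m": (0.8, 1.6),
    }

    def run_aquacrop(params):                          # placeholder: write inputs, call the model, parse outputs
        return {"yield_t_ha": 4.0 + 10 * params["harvest_index"] * rng.normal(1.0, 0.05)}

    results = []
    for _ in range(3000):                              # 3000 Monte Carlo runs, as in the study
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        results.append({**params, **run_aquacrop(params)})

    yields = np.array([r["yield_t_ha"] for r in results])
    print("yield 5th-95th percentile:", np.percentile(yields, [5, 95]))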

RevDate: 2025-07-13
CmpDate: 2025-07-10

AlArnaout Z, Zaki C, Kotb Y, et al (2025)

Exploiting heart rate variability for driver drowsiness detection using wearable sensors and machine learning.

Scientific reports, 15(1):24898.

Driver drowsiness is a critical issue in transportation systems and a leading cause of traffic accidents. Common factors contributing to accidents include intoxicated driving, fatigue, and sleep deprivation. Drowsiness significantly impairs a driver's response time, awareness, and judgment. Implementing systems capable of detecting and alerting drivers to drowsiness is therefore essential for accident prevention. This paper examines the feasibility of using heart rate variability (HRV) analysis to assess driver drowsiness. It explores the physiological basis of HRV and its correlation with drowsiness. We propose a system model that integrates wearable devices equipped with photoplethysmography (PPG) sensors, transmitting data to a smartphone and then to a cloud server. Two novel algorithms are developed to segment and label features periodically, predicting drowsiness levels based on HRV derived from PPG signals. The proposed approach is evaluated using real-driving data and supervised machine learning techniques. Six classification algorithms are applied to labeled datasets, with performance metrics such as accuracy, precision, recall, F1-score, and runtime assessed to determine the most effective algorithm for timely drowsiness detection and driver alerting. Our results demonstrate that the Random Forest (RF) classifier achieves the highest testing accuracy (86.05%), precision (87.16%), recall (93.61%), and F1-score (89.02%) with a small mean change between training and testing datasets (-4.30%), highlighting its robustness for real-world deployment. The Support Vector Machine with Radial Basis Function (SVM-RBF) also shows strong generalization performance, with a testing F1-score of 87.15% and the smallest mean change of -3.97%. These findings suggest that HRV-based drowsiness detection systems can be effectively integrated into Advanced Driver Assistance Systems (ADAS) to enhance driver safety by providing timely alerts, thereby reducing the risk of accidents caused by drowsiness.
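Two of the standard HRV features behind such systems (SDNN and RMSSD) are easy to compute from RR intervals; the sketch below uses synthetic intervals and labels in place of the paper's PPG-derived dataset, and the Random Forest settings are assumptions.

    # Sketch: HRV features from RR intervals, fed to a Random Forest drowsiness classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def hrv_features(rr_ms):
        rr = np.asarray(rr_ms, dtype=float)
        sdnn = rr.std()                                # overall variability
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))     # short-term (beat-to-beat) variability
        return [rr.mean(), sdnn, rmssd]

    rng = np.random.default_rng(6)
    windows, labels = [], []
    for _ in range(500):
        drowsy = int(rng.integers(0, 2))
        base = 900 if drowsy else 800                  # synthetic assumption: drowsiness lengthens RR intervals
        rr = rng.normal(base, 30 if drowsy else 60, size=60)
        windows.append(hrv_features(rr))
        labels.append(drowsy)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(windows, labels)
    print("training accuracy:", clf.score(windows, labels))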

RevDate: 2025-07-12

Feng K, D Haridas (2025)

A unified model integrating UTAUT-Behavioural intension and Object-Oriented approaches for sustainable adoption of Cloud-Based collaborative platforms in higher education.

Scientific reports, 15(1):24767.

In recent years, cloud computing (CC) services have expanded rapidly, with platforms like Google Drive, Dropbox, and Apple iCloud gaining global adoption. This study develops a predictive model to identify the key factors influencing Jordanian academics' behavioral intention to adopt sustainable cloud-based collaborative systems (SCBCS). By integrating the Unified Theory of Acceptance and Use of Technology (UTAUT) with system design methodologies, we put forward a comprehensive research model to improve the adoption and efficiency of SCBCS in developing countries. Using cross-sectional data from 500 professors in Jordanian higher education institutions, we adapt and extend the UTAUT model to explain behavioral intention and assess its impact on teaching and learning processes. Both exploratory and confirmatory analyses show that the expanded UTAUT model significantly improves the variance explained in behavioral intention. The study's key findings reveal that behavioral control, effort expectancy, and social influence significantly affect attitudes towards using cloud services. The work also contributes to sustainable development goals by promoting the adoption of energy-efficient and resource-optimized cloud-based platforms in higher education. The findings provide actionable insights for policymakers and educators seeking to improve sustainable technology adoption in developing countries, ultimately improving the quality and sustainability of educational processes.

RevDate: 2025-07-11
CmpDate: 2025-07-09

Wang Z, Ding T, Liang S, et al (2025)

Workpiece surface defect detection based on YOLOv11 and edge computing.

PloS one, 20(7):e0327546.

The rapid development of modern industry has significantly raised the demand for workpieces. To ensure the quality of workpieces, workpiece surface defect detection has become an indispensable part of industrial production. Most workpiece surface defect detection technologies rely on cloud computing. However, transmitting large volumes of data via wireless networks places substantial computational burdens on cloud servers, significantly reducing defect detection speed. Therefore, to enable efficient and precise detection, this paper proposes a workpiece surface defect detection method based on YOLOv11 and edge computing. First, the NEU-DET dataset was expanded using random flipping, cropping, and the self-attention generative adversarial network (SA-GAN). Then, the accuracy indicators of the YOLOv7-YOLOv11 models were compared on NEU-DET and validated on the Tianchi aluminium profile surface defect dataset. Finally, the cloud-based YOLOv11 model, which achieved the highest accuracy, was converted to the edge-based YOLOv11-RKNN model and deployed on the RK3568 edge device to improve the detection speed. Results indicate that YOLOv11 with SA-GAN achieved mAP@0.5 improvements of 7.7%, 3.1%, 5.9%, and 7.0% over YOLOv7, YOLOv8, YOLOv9, and YOLOv10, respectively, on the NEU-DET dataset. Moreover, YOLOv11 with SA-GAN achieved an 87.0% mAP@0.5 on the Tianchi aluminium profile surface defect dataset, outperforming the other models again. This verifies the generalisability of the YOLOv11 model. Additionally, quantising and deploying YOLOv11 on the edge device reduced its size from 10,156 kB to 4,194 kB and reduced its single-image detection time from 52.1 ms to 33.6 ms, which represents a significant efficiency enhancement.
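A hedged sketch of the export step for edge deployment, assuming the Ultralytics package and its YOLOv11 weights; the weight and dataset file names are assumptions, and the final ONNX-to-RKNN conversion for the RK3568 (with rknn-toolkit2) is only indicated in a comment rather than shown.

    # Sketch: load/train a YOLO model and export it to ONNX as a step towards an RKNN edge model.
    from ultralytics import YOLO

    model = YOLO("yolo11n.pt")                        # pretrained YOLOv11 nano weights (assumed available)
    # model.train(data="neu-det.yaml", epochs=100)    # hypothetical dataset config for NEU-DET
    onnx_path = model.export(format="onnx", imgsz=640)
    print("exported:", onnx_path)
    # Next step (outside this sketch): rknn-toolkit2 loads the ONNX file, quantises it,
    # and emits a .rknn model that the RK3568 NPU can run.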

RevDate: 2025-07-28
CmpDate: 2025-07-09

Park J, Lee S, Park G, et al (2025)

Mental health help-seeking behaviours of East Asian immigrants: a scoping review.

European journal of psychotraumatology, 16(1):2514327.

Background: The global immigrant population is increasing annually, and Asian immigrants have a substantial representation within the immigrant population. Due to a myriad of challenges such as acculturation, discrimination, language, and financial issues, immigrants are at high risk of mental health conditions. However, a large-scale mapping of the existing literature regarding these issues has yet to be completed. Objective: This study aimed to investigate the mental health conditions, help-seeking behaviours, and factors affecting mental health service utilization among East Asian immigrants residing in Western countries. Method: This study adopted the scoping review methodology based on the Joanna Briggs Institute framework. A comprehensive database search was conducted in May 2024 in PubMed, CINAHL, Embase, Cochrane, and Google Scholar. Search terms were developed based on the participants, concept, and context framework. The participants were East Asian immigrants and their families, and the concept of interest was mental health help-seeking behaviours and mental health service utilization. Regarding the context, studies targeting East Asian immigrants in Western countries were included. Data were summarized narratively and presented in a tabular and word cloud format. Results: Out of 1990 studies, 31 studies were included. East Asian immigrants often face mental health conditions, including depression, anxiety, and suicidal behaviours. They predominantly sought help from informal sources such as family, friends, religion, and complementary or alternative medicine, rather than from formal sources such as mental health clinics or healthcare professionals. Facilitators of seeking help included recognizing the need for professional help, experiencing severe symptoms, higher levels of acculturation, and a longer length of stay in the host country. Barriers included stigma, cultural beliefs, and language barriers. Conclusions: The review emphasizes the need for culturally tailored interventions to improve mental health outcomes in this vulnerable population. These results can guide future research and policymaking to address mental health disparities in immigrant communities.

RevDate: 2025-08-11

Ran S, Guo Y, Liu Y, et al (2025)

A 4×256 Gbps silicon transmitter with on-chip adaptive dispersion compensation.

Nature communications, 16(1):6268.

The exponential growth of data traffic propelled by cloud computing and artificial intelligence necessitates advanced optical interconnect solutions. While wavelength division multiplexing (WDM) enhances optical module transmission capacity, chromatic dispersion becomes a critical limitation as single-lane rates exceed 200 Gbps. Here we demonstrate a 4-channel silicon transmitter achieving 1 Tbps aggregate data rate through integrated adaptive dispersion compensation. This transmitter utilizes Mach-Zehnder modulators with adjustable input intensity splitting ratios, enabling precise control over the chirp magnitude and sign to counteract specific dispersion. At 1271 nm (-3.99 ps/nm/km), the proposed transmitter enabled 4 × 256 Gbps transmission over 5 km fiber, achieving bit error ratio below both the soft-decision forward-error correction threshold with feed-forward equalization (FFE) alone and the hard-decision forward-error correction threshold when combining FFE with maximum-likelihood sequence detection. Our results highlight a significant leap towards scalable, energy-efficient, and high-capacity optical interconnects, underscoring its potential in future local area network WDM applications.

RevDate: 2025-07-08
CmpDate: 2025-07-05

Damera VK, Cheripelli R, Putta N, et al (2025)

Enhancing remote patient monitoring with AI-driven IoMT and cloud computing technologies.

Scientific reports, 15(1):24088.

The rapid advancement of the Internet of Medical Things (IoMT) has revolutionized remote healthcare monitoring, enabling real-time disease detection and patient care. This research introduces a novel AI-driven telemedicine framework that integrates IoMT, cloud computing, and wireless sensor networks for efficient healthcare monitoring. A key innovation of this study is the Transformer-based Self-Attention Model (TL-SAM), which enhances disease classification by replacing conventional convolutional layers with transformer layers. The proposed TL-SAM framework effectively extracts spatial and spectral features from patient health data, optimizing classification accuracy. Furthermore, the model employs an Improved Wild Horse Optimization with Levy Flight Algorithm (IWHOLFA) for hyperparameter tuning, enhancing its predictive performance. Real-time biosensor data is collected and transmitted to an IoMT cloud repository, where AI-driven analytics facilitate early disease diagnosis. Extensive experimentation on the UCI dataset demonstrates the superior accuracy of TL-SAM compared to conventional deep learning models, achieving an accuracy of 98.62%, precision of 97%, recall of 98%, and F1-score of 97%. The study highlights the effectiveness of AI-enhanced IoMT systems in reducing healthcare costs, improving early disease detection, and ensuring timely medical interventions. The proposed approach represents a significant advancement in smart healthcare, offering a scalable and efficient solution for remote patient monitoring and diagnosis.
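A minimal PyTorch sketch of a transformer-encoder classifier over tabular sensor features, in the spirit of a self-attention model such as TL-SAM; the layer sizes, the per-feature tokenization, and the assumption of 13 input attributes are illustrative, and the paper's exact architecture and IWHOLFA tuning are not reproduced.

    # Sketch: self-attention (transformer encoder) classifier over scalar health features.
    import torch
    import torch.nn as nn

    class SelfAttentionClassifier(nn.Module):
        def __init__(self, n_features=13, d_model=64, n_heads=4, n_layers=2, n_classes=2):
            super().__init__()
            self.embed = nn.Linear(1, d_model)         # one token per scalar feature
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, n_classes)

        def forward(self, x):                          # x: (batch, n_features)
            tokens = self.embed(x.unsqueeze(-1))       # (batch, n_features, d_model)
            encoded = self.encoder(tokens)             # self-attention across features
            return self.head(encoded.mean(dim=1))      # pool tokens, then classify

    model = SelfAttentionClassifier()
    logits = model(torch.randn(8, 13))                 # batch of 8 records with 13 attributes (assumed)
    print(logits.shape)                                # torch.Size([8, 2])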

RevDate: 2025-07-05

Cabello J, Escudero-Clares M, Martos-Rosillo S, et al (2025)

A dataset on potentially groundwater-dependent vegetation in the Sierra Nevada Protected Area (Southern Spain) and its underlying NDVI-derived ecohydrological attributes.

Data in brief, 61:111760.

This dataset provides a spatially explicit classification of potentially groundwater-dependent vegetation (pGDV) in the Sierra Nevada Protected Area (Southern Spain), generated using Sentinel-2 imagery (2019-2023) and ecohydrological attributes derived from NDVI time series. NDVI metrics were calculated from cloud- and snow-filtered Sentinel-2 Level 2A images processed in Google Earth Engine. Monthly NDVI values were used to extract three ecohydrological indicators: dry-season NDVI, dry-wet seasonal NDVI difference, and interannual NDVI variability. Based on quartile classifications of these indicators, 64 ecohydrological vegetation classes were defined. These were further clustered into three levels of potential groundwater dependence using hierarchical clustering techniques, differentiating between alpine and lower-elevation aquifer zones. The dataset includes raster layers (GeoTIFF) of the ecohydrological classes and pGDV types at 10 m spatial resolution, a CSV file with descriptive statistics for each class, and complete metadata. All spatial layers are projected in ETRS89 / UTM Zone 30N (EPSG: 25830) and are ready for visualization and analysis in standard GIS platforms. Partial validation of the classification was performed using spring location data and the distribution of hygrophilous plant species from official conservation databases. This available dataset enables reproducible analysis of vegetation-groundwater relationships in dryland mountain ecosystems. It supports comparative research across regions, facilitates the study of groundwater buffering effects on vegetation function, and offers a transferable framework for ecohydrological classification based on remote sensing. The data can be reused to inform biodiversity conservation, groundwater management, and climate change adaptation strategies in the Mediterranean and other water-limited mountain regions.
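The three NDVI-derived indicators can be sketched from a monthly NDVI series; the synthetic series and the months treated as dry and wet season below are assumptions, not the dataset's definitions.

    # Sketch: dry-season NDVI, dry-wet seasonal difference, and interannual variability per pixel.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)
    idx = pd.date_range("2019-01-01", "2023-12-01", freq="MS")
    ndvi = pd.Series(0.4 + 0.2 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 0.03, len(idx)),
                     index=idx, name="ndvi")           # monthly NDVI for one pixel (synthetic)

    dry = ndvi[ndvi.index.month.isin([7, 8, 9])]       # assumed dry-season months
    wet = ndvi[ndvi.index.month.isin([1, 2, 3])]       # assumed wet-season months

    dry_season_ndvi = dry.mean()
    seasonal_difference = wet.mean() - dry.mean()
    interannual_variability = ndvi.groupby(ndvi.index.year).mean().std()

    print(dry_season_ndvi, seasonal_difference, interannual_variability)
    # Per-pixel values like these would then be split into quartiles (e.g. with pd.qcut)
    # to build the 64 ecohydrological classes described above.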

RevDate: 2025-07-05

Xing S, Sun A, Wang C, et al (2025)

Seamless optical cloud computing across edge-metro network for generative AI.

Nature communications, 16(1):6097.

The rapid advancement of generative artificial intelligence (AI) in recent years has profoundly reshaped modern lifestyles, necessitating a revolutionary architecture to support the growing demands for computational power. Cloud computing has become the driving force behind this transformation. However, it consumes significant power and faces computation security risks due to the reliance on extensive data centers and servers in the cloud. Reducing power consumption while enhancing computational scale remain persistent challenges in cloud computing. Here, we propose and experimentally demonstrate an optical cloud computing system that can be seamlessly deployed across an edge-metro network. By modulating inputs and models into light, a wide range of edge nodes can directly access the optical computing center via the edge-metro network. The experimental validations show an energy efficiency of 118.6 mW/TOPS (tera operations per second), reducing energy consumption by two orders of magnitude compared to traditional electronic-based cloud computing solutions. Furthermore, it is experimentally validated that this architecture can perform various complex generative AI models through parallel computing to achieve image generation tasks.

RevDate: 2025-07-04
CmpDate: 2025-07-02

Meiring C, Eygelaar M, Fourie J, et al (2025)

Tick genomics through a Nanopore: a low-cost approach for tick genomics.

BMC genomics, 26(1):591.

BACKGROUND: The assembly of large and complex genomes can be costly since it typically requires the utilization of multiple sequencing technologies and access to high-performance computing, while creating a dependency on external service providers. The aim of this study was to independently generate draft genomes for the cattle ticks Rhipicephalus microplus and R. appendiculatus using Oxford Nanopore sequencing technology.

RESULTS: Oxford Nanopore sequence data were assembled exclusively with Shasta and finalized on the Amazon Web Services cloud platform, capitalizing on the availability of Spot instances discounted by up to 90%. The assembled and polished R. microplus and R. appendiculatus genomes from our study were comparable to published tick genomes generated with multiple sequencing technologies and costly bioinformatic resources that are not readily accessible in low-resource environments. We predicted 52,412 genes for R. appendiculatus, with 31,747 of them functionally annotated. The R. microplus annotation consisted of 60,935 predicted genes, with 32,263 functionally annotated in the final file. The sequence data were also used to assemble and annotate genetically distinct Coxiella-like endosymbiont genomes for each tick species. The results indicated that each of the endosymbionts exhibited genome reduction. The Nanopore Q20+ library kit and flow cell were used to sequence the > 80% AT-rich mitochondrial DNA of both tick species. The sequencing generated accurate mitochondrial genomes, encountering imperfect base calling only in homopolymer regions exceeding 10 bases.

CONCLUSION: This study presents an alternative approach for smaller laboratories with limited budgets to enter the field and participate in genomics without capital intensive investments, allowing for capacity building in a field normally exclusively accessible through collaboration and large funding opportunities.

RevDate: 2025-07-04

Rajammal K, M Chinnadurai (2025)

Dynamic load balancing in cloud computing using predictive graph networks and adaptive neural scheduling.

Scientific reports, 15(1):22181 pii:10.1038/s41598-025-97494-2.

Load balancing is one of the significant challenges in cloud environments due to the heterogeneity and dynamic nature of resource states and workloads. Traditional load balancing procedures struggle to adapt to real-time variations, which leads to inefficient resource utilization and increased response times. To overcome these issues, this work presents a novel approach utilizing Spiking Neural Networks (SNNs) for adaptive decision-making and Temporal Graph Neural Networks (TGNNs) for dynamic resource state modeling. The proposed SNN model identifies short-term workload fluctuations and long-term trends, whereas the TGNN represents the cloud environment as a dynamic graph to predict future resource availability. Additionally, reinforcement learning is incorporated to optimize SNN decisions based on feedback from the TGNN's state predictions. Experimental evaluations of the proposed model under diverse workload scenarios demonstrate significant improvements in throughput, energy efficiency, makespan, and response time. Comparative analyses with existing optimization algorithms further demonstrate the proposed model's ability to manage loads in cloud computing. Compared to existing methods, the proposed model achieves 20% higher throughput, a 35% shorter makespan, a 40% lower response time, and 30-40% lower energy consumption.

RevDate: 2025-07-04

Cui J, Shi L, A Alkhayyat (2025)

Enhanced security for IoT cloud environments using EfficientNet and enhanced football team training algorithm.

Scientific reports, 15(1):20764.

The growing implementation of Internet of Things (IoT) technology has resulted in a significant increase in the number of connected devices, thereby exposing IoT-cloud environments to a range of cyber threats. As the number of IoT devices continues to grow, the potential attack surface also enlarges, complicating the task of securing these systems. This paper introduces an innovative approach to intrusion detection that integrates EfficientNet with a newly refined metaheuristic known as the Enhanced Football Team Training Algorithm (EFTTA). The proposed EfficientNet/EFTTA model aims to identify anomalies and intrusions in IoT-cloud environments with enhanced accuracy and efficiency. The effectiveness of this model is measured on standard datasets and compared against other methods across several performance metrics. The results indicate that the proposed method surpasses existing techniques, demonstrating accuracy above 98.56% on NSL-KDD and 99.1% on BoT-IoT in controlled experiments for the protection of IoT-cloud infrastructures.

RevDate: 2025-07-04
CmpDate: 2025-07-02

Bhattacharya P, Mukherjee A, Bhushan B, et al (2025)

A secured remote patient monitoring framework for IoMT ecosystems.

Scientific reports, 15(1):22882.

Recent advancements in the Internet of Medical Things (IoMT) allow patients to set up smart sensors and medical devices that connect to remote healthcare setups. However, existing remote patient monitoring solutions predominantly rely on persistent connectivity and centralized cloud processing, resulting in high latency and energy consumption, particularly in environments with intermittent network availability. There is a need for real-time IoMT computing closer to the dew layer, with secure and privacy-enabled access to healthcare data. To address this, we propose the DeW-IoMT framework, which adds a dew layer to roof-fog-cloud systems. Notably, our approach introduces a novel roof computing layer that acts as an intermediary gateway between the dew and fog layers, enhancing data security and reducing communication latency. The proposed architecture provides critical services during disconnected operation and minimizes the computational requirements of the fog-cloud system. We measure heart rate using a pulse sensor, where the dew layer supports remote patient monitoring with low overheads. We experimentally analyze the proposed scheme's response time, energy dissipation, and bandwidth, and present a simulation analysis of the fog layer using the iFogSim software. Our results at the dew layer demonstrate a 74.61% reduction in response time, a 38.78% decrease in energy consumption, and a 33.56% reduction in task data compared to traditional cloud-centric models. Our findings validate the framework's viability in scalable IoMT setups.

RevDate: 2025-07-05

Sun Y, Zhang Y, Hao J, et al (2025)

Agricultural greenhouses datasets of 2010, 2016, and 2022 in China.

Scientific data, 12(1):1107.

China has built the world's largest area of agricultural greenhouses to meet the demands of climate change and shifting dietary structures. Accurate and timely information on the spatial extent of agricultural greenhouses is crucial for effectively managing and improving the quality of agricultural production. However, high-quality, high-resolution data on Chinese agricultural greenhouses are still lacking due to difficulties in identification and an insufficient number of representative training samples. This study proposes a method for identifying agricultural greenhouses from spectral and texture information at key growth stages, using the Google Earth Engine (GEE) cloud platform and Landsat 7 remote sensing images, with a large number of samples collected through combined field surveys and visual interpretation. The method used a random forest classifier to extract spatial information from the remote sensing data and produce classification datasets of Chinese agricultural greenhouses for 2010, 2016, and 2022. The overall accuracy reached 97%, with a kappa coefficient of 0.82. This dataset may help researchers and decision-makers further develop research and management in facility agriculture.

RevDate: 2025-07-04
CmpDate: 2025-07-02

Nyakuri JP, Nkundineza C, Gatera O, et al (2025)

AI and IoT-powered edge device optimized for crop pest and disease detection.

Scientific reports, 15(1):22905.

Climate change exacerbates the challenges of maintaining crop health by influencing invasive pest and disease infestations, especially for cereal crops, leading to enormous yield losses. Consequently, innovative solutions are needed to monitor crop health from early development stages through harvesting. While various technologies, such as the Internet of Things (IoT), machine learning (ML), and artificial intelligence (AI), have been used, portable, cost-effective, and energy-efficient solutions suitable for resource-constrained environments such as edge applications in agriculture are needed. This study presents the development of a portable smart IoT device that integrates a lightweight convolutional neural network (CNN), called Tiny-LiteNet, optimized for edge applications with built-in support of model explainability. The system consists of a high-definition camera for real-time plant image acquisition, a Raspberry-Pi 5 integrated with the Tiny-LiteNet model for edge processing, and a GSM/GPRS module for cloud communication. The experimental results demonstrated that Tiny-LiteNet achieved up to 98.6% accuracy, 98.4% F1-score, 98.2% Recall, 80 ms inference time, while maintaining a compact model size of 1.2 MB with 1.48 million parameters, outperforming traditional CNN architectures such as VGGNet-16, Inception, ResNet50, DenseNet121, MobileNetv2, and EfficientNetB0 in terms of efficiency and suitability for edge computing. Additionally, the low power consumption and user-friendly design of this smart device make it a practical tool for farmers, enabling real-time pest and disease detection, promoting sustainable agriculture, and enhancing food security.

RevDate: 2025-07-01
CmpDate: 2025-07-01

Abbasi SF, Ahmad R, Mukherjee T, et al (2025)

A Novel and Secure 3D Colour Medical Image Encryption Technique Using 3D Hyperchaotic Map, S-box and Discrete Wavelet Transform.

Studies in health technology and informatics, 328:268-272.

Over the past two decades, there has been a substantial increase in the use of the Internet of Medical Things (IoMT). In the smart healthcare setting, patients' data can be quickly collected, stored, and processed through insecure media such as the internet or cloud computing. To address this issue, researchers have developed a range of encryption algorithms to protect medical image data; however, these remain vulnerable to brute force and differential cryptanalysis attacks by eavesdroppers. In this study, we propose an efficient approach to enhance the security of medical image transmission by transforming the ciphertext image into a visually meaningful image. The proposed algorithm uses a 3D hyperchaotic system to generate three chaotic sequences for permutation and diffusion, followed by the application of a substitution box (S-box) to increase redundancy. Additionally, the proposed study employs the discrete wavelet transform (DWT) to transform the ciphertext image into a visually meaningful image. This final image is not only secure but also more resistant to cyberattacks. The proposed encryption model demonstrates strong security performance, with key metrics including a Unified Average Changing Intensity (UACI) of 36.17% and a Number of Pixels Change Rate (NPCR) of 99.57%, highlighting its effectiveness in ensuring secure medical image transmission.
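The two cipher-image metrics reported above (NPCR and UACI) have standard definitions that are easy to compute; the random 8-bit images below are placeholders for a pair of cipher images.

    # Sketch: NPCR and UACI between two 8-bit cipher images.
    import numpy as np

    def npcr(c1, c2):
        return 100.0 * np.mean(c1 != c2)                                   # % of pixels that differ

    def uaci(c1, c2):
        return 100.0 * np.mean(np.abs(c1.astype(int) - c2.astype(int)) / 255.0)

    rng = np.random.default_rng(8)
    cipher1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
    cipher2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
    print("NPCR %:", npcr(cipher1, cipher2), "UACI %:", uaci(cipher1, cipher2))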

RevDate: 2025-07-01
CmpDate: 2025-07-01

Drabo C, S Malo (2025)

Fog-Enabled Modular Deep Learning Platform for Textual Data Mining in Healthcare for Pathology Detection in Burkina Faso.

Studies in health technology and informatics, 328:173-177.

In this paper, we propose an architecture for a deep learning-based medical diagnosis support platform in Burkina Faso. The model is built by merging the diagnosis and treatment guide with models derived from textual data recovered via optical character recognition (OCR) on handwritten prescriptions and from electronic health records. Through simulation, we compared two architectures adapted to the Burkinabe context, a fog-based architecture and a cloud-based architecture, and validated the one best suited to the organization of the country's health system.

RevDate: 2025-07-02

Brittain JS, Tsui J, Inward R, et al (2025)

GRAPEVNE - Graphical Analytical Pipeline Development Environment for Infectious Diseases.

Wellcome open research, 10:279.

The increase in volume and diversity of relevant data on infectious diseases and their drivers provides opportunities to generate new scientific insights that can support 'real-time' decision-making in public health across outbreak contexts and enhance pandemic preparedness. However, utilising the wide array of clinical, genomic, epidemiological, and spatial data collected globally is difficult due to differences in data preprocessing, data science capacity, and access to hardware and cloud resources. To facilitate large-scale and routine analyses of infectious disease data at the local level (i.e. without sharing data across borders), we developed GRAPEVNE (Graphical Analytical Pipeline Development Environment), a platform enabling the construction of modular pipelines designed for complex and repetitive data analysis workflows through an intuitive graphical interface. Built on the Snakemake workflow management system, GRAPEVNE streamlines the creation, execution, and sharing of analytical pipelines. Its modular approach already supports a diverse range of scientific applications, including genomic analysis, epidemiological modeling, and large-scale data processing. Each module in GRAPEVNE is a self-contained Snakemake workflow, complete with configurations, scripts, and metadata, enabling interoperability. The platform's open-source nature ensures ongoing community-driven development and scalability. GRAPEVNE empowers researchers and public health institutions by simplifying complex analytical workflows, fostering data-driven discovery, and enhancing reproducibility in computational research. Its user-driven ecosystem encourages continuous innovation in biomedical and epidemiological research but is applicable beyond that. Key use-cases include automated phylogenetic analysis of viral sequences, real-time outbreak monitoring, forecasting, and epidemiological data processing. For instance, our dengue virus pipeline demonstrates end-to-end automation from sequence retrieval to phylogeographic inference, leveraging established bioinformatics tools which can be deployed to any geographical context. For more details, see documentation at: https://grapevne.readthedocs.io.

RevDate: 2025-07-09
CmpDate: 2025-07-04

Smith SD, Velásquez-Zapata V, RP Wise (2025)

NGPINT V3: a containerized orchestration Python software for discovery of next-generation protein-protein interactions.

Bioinformatics (Oxford, England), 41(6):.

SUMMARY: Batch yeast two-hybrid (Y2H) assays, leveraged with next-generation sequencing, have afforded successful innovations for the analysis of protein-protein interactions. NGPINT is a Conda-based software designed to process the millions of raw sequencing reads resulting from Y2H-next-generation interaction screens. Over time, increasing compatibility and dependency issues have prevented clean NGPINT installation and operation. A system-wide update was essential to continue effective use with its companion software, Y2H-SCORES. We present NGPINT V3, a containerized implementation built with both Singularity and Docker, allowing accessibility across virtually any operating system and computing environment.

This update includes streamlined dependencies and container images hosted on Sylabs (https://cloud.sylabs.io/library/schuyler/ngpint/ngpint) and Dockerhub (https://hub.docker.com/r/schuylerds/ngpint), facilitating easier adoption and integration into high-throughput and cloud-computing workflows. Full instructions and software can be also found in the GitHub repository https://github.com/Wiselab2/NGPINT_V3 and Zenodo https://doi.org/10.5281/zenodo.15256036.

RevDate: 2025-07-08
CmpDate: 2025-06-27

Xiuqing W, Pirasteh S, Husain HJ, et al (2025)

Leveraging machine learning for monitoring afforestation in mining areas: evaluating Tata Steel's restoration efforts in Noamundi, India.

Environmental monitoring and assessment, 197(7):816 pii:10.1007/s10661-025-14294-x.

Mining activities have long been associated with significant environmental impacts, including deforestation, habitat degradation, and biodiversity loss, necessitating targeted strategies like afforestation to mitigate ecological damage. Tata Steel's afforestation initiative near its Noamundi iron ore mining site in Jharkhand, India, spanning 165.5 hectares with over 1.1 million saplings planted, is a critical case study for evaluating such restoration efforts. However, assessing the success of these initiatives requires robust, scalable methods to monitor land use changes over time, a challenge compounded by the need for accurate, cost-effective tools to validate ecological recovery and support environmental governance frameworks. This study introduces a novel approach by integrating multiple machine learning (ML) algorithms (classification and regression tree (CART), random forest, minimum distance, gradient tree boost, and Naive Bayes) with multi-temporal, multi-resolution satellite imagery (Landsat, Sentinel-2A, PlanetScope) on Google Earth Engine (GEE) to analyze land use dynamics in 1987, 2016, and 2022. In a novel application to such contexts, high-resolution PlanetScope data (3 m) and drone imagery were leveraged to validate classification accuracy using an 80:20 training-testing data split. The comparison of ML methods across varying spatial resolutions and temporal scales provides a methodological advancement for monitoring afforestation in mining landscapes, emphasizing reproducibility and precision. Results identified the CART and Naive Bayes classifiers as the most accurate (83% accuracy with PlanetScope 2022 data), effectively mapping afforestation progress and land use changes. These findings highlight the utility of ML-driven remote sensing in offering spatially explicit, cost-effective monitoring of restoration initiatives, directly supporting Environmental, Social, and Governance (ESG) reporting by enhancing transparency in ecological management.

RevDate: 2025-06-27

Badshah A, Banjar A, Habibullah S, et al (2025)

Social big data management through collaborative mobile, regional, and cloud computing.

PeerJ. Computer science, 11:e2689.

Smart devices surround us at all times. These devices popularize social media platforms (SMP), connecting billions of users. The enhanced functionalities of smart devices generate big data that overutilizes the mainstream network, degrading performance, increasing overall cost, and compromising time-sensitive services. Research indicates that about 75% of connections come from local areas, and their workload does not need to be migrated to remote servers in real time. Collaboration among mobile edge computing (MEC), regional computing (RC), and cloud computing (CC) can effectively fill these gaps. Therefore, we propose a collaborative structure of mobile, regional, and cloud computing to address the issues arising from social big data (SBD). In this model, content can be accessed from the nearest device or server rather than downloaded from the cloud server. Furthermore, instead of transferring each file to the cloud servers during peak hours, files are initially stored at the regional level and subsequently uploaded to the cloud servers during off-peak hours. The outcomes affirm that this approach significantly reduces the impact of substantial SBD on the performance of mainstream and social network platforms, specifically in terms of delay, response time, and cost.

RevDate: 2025-06-27

Zeng M, Mohamad Hashim MS, Ayob MN, et al (2025)

Intersection collision prediction and prevention based on vehicle-to-vehicle (V2V) and cloud computing communication.

PeerJ. Computer science, 11:e2846.

In modern transportation systems, the management of traffic safety has become increasingly critical as both the number and complexity of vehicles continue to rise, and these systems frequently encounter multiple challenges. Consequently, the effective assessment and management of collision risks in various scenarios within transportation systems are paramount to ensuring traffic safety and enhancing road utilization efficiency. In this paper, we tackle the issue of intelligent traffic collision prediction and propose a vehicle collision risk prediction model based on vehicle-to-vehicle (V2V) communication and the graph attention network (GAT). Initially, the framework gathers vehicle trajectory, speed, acceleration, and relative position information via V2V communication technology to construct a graph representation of the traffic environment. Subsequently, the GAT model extracts interaction features between vehicles and optimizes the vehicle driving strategy through deep reinforcement learning (DRL), thereby augmenting the model's decision-making capabilities. Experimental results demonstrate that the framework achieves over 80% collision recognition accuracy in terms of true warning rate on both public and real-world datasets. The metrics for false detection are thoroughly analyzed, revealing the efficacy and robustness of the proposed framework. This method introduces a novel technological approach to collision prediction in intelligent transportation systems and holds significant implications for enhancing traffic safety and decision-making efficiency.
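The sketch below shows one way a single graph-attention layer could operate over a vehicle graph whose nodes carry position, speed, and acceleration features; it is a generic GAT-style layer written for illustration, not the authors' model, and the feature layout is an assumption.

```python
# Generic single-head GAT-style layer over a V2V vehicle graph (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) 0/1 adjacency from V2V range.
        h = self.W(x)                                   # (N, out_dim)
        N = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(N, N, -1),
                           h.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(self.attn(pairs)).squeeze(-1)  # (N, N) raw scores
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(e, dim=-1)                # attention over neighbors
        return torch.relu(alpha @ h)

# Toy usage: 5 vehicles, all within communication range of each other.
x = torch.randn(5, 4)                       # assumed features: [x, y, speed, acceleration]
adj = torch.ones(5, 5)
print(SimpleGATLayer(4, 8)(x, adj).shape)   # torch.Size([5, 8])
```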

RevDate: 2025-06-27

S S, JP P M (2025)

A novel dilated weighted recurrent neural network (RNN)-based smart contract for secure sharing of big data in Ethereum blockchain using hybrid encryption schemes.

PeerJ. Computer science, 11:e2930.

BACKGROUND: With the growing volume of data being created, processing and managing big data has become a significant challenge for data managers across organizations. Inexpensive new computing systems and the cloud computing sector have enabled industries to gather and retrieve data precisely; however, securely delivering data across the network with low overhead remains demanding. In a decentralized framework, big data sharing places a burden on the internal nodes between sender and receiver and also creates network congestion. The internal nodes that redirect information may have inadequate buffer capacity to temporarily hold the information before forwarding it to the next node, which can cause occasional faults and frequent failures in data transmission. Hence, selecting the next node to deliver the data is a tiresome task, resulting in an increase in the total time required to deliver the information.

METHODS: Blockchain is the primary distributed mechanism with its own approach to trust. It constructs a reliable framework for decentralized control via multi-node data replication and offers transparency to the transmission process. A simultaneous multi-threading framework ensures quick data channeling to various network receivers in a very short time. Therefore, an advanced method to securely store and transfer big data in a timely manner is developed in this work. A deep learning-based smart contract is first designed: the dilated weighted recurrent neural network (DW-RNN) is used to design the smart contract for the Ethereum blockchain. With the aid of the DW-RNN model, user authentication is verified before data in the Ethereum blockchain is accessed. If the user's authentication is verified, smart contracts are assigned to the authorized user. The model uses elliptic curve ElGamal cryptography (EC-EC), a combination of elliptic curve cryptography (ECC) and ElGamal encryption, for better security, ensuring that big data transfers on the Ethereum blockchain are safe. The modified Al-Biruni earth radius search optimization (MBERSO) algorithm is used to generate optimal keys for this EC-EC encryption scheme, managing keys efficiently and securely and thereby improving data security during blockchain operations.

RESULTS: The encryption processes facilitate the secure transmission of big data over the Ethereum blockchain. Experimental analysis is carried out to demonstrate the efficacy and security offered by the suggested model in transferring big data over the blockchain via smart contracts.
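As a loose illustration of the authentication gating described in METHODS, the sketch below uses a plain GRU classifier to decide whether a user's request sequence looks legitimate before any encrypted transfer would proceed; the paper's dilated weighted RNN, EC-EC encryption, and MBERSO key generation are not reproduced, and all feature shapes and thresholds are assumptions.

```python
# Highly simplified stand-in for a learning-based authentication gate.
import torch
import torch.nn as nn

class AuthGate(nn.Module):
    def __init__(self, feat_dim=8, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):                       # seq: (batch, time, feat_dim)
        _, h = self.rnn(seq)
        return torch.sigmoid(self.head(h[-1]))    # probability user is legitimate

gate = AuthGate()
request_history = torch.randn(1, 20, 8)           # hypothetical behavioural features
if gate(request_history).item() > 0.5:
    print("authenticated: assign smart contract, proceed to encrypted transfer")
else:
    print("rejected")
```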

RevDate: 2025-06-27

Salih S, Abdelmaboud A, Husain O, et al (2025)

IoT in urban development: insight into smart city applications, case studies, challenges, and future prospects.

PeerJ. Computer science, 11:e2816.

With the integration of Internet of Things (IoT) technology, smart cities possess the capability to advance their public transportation modalities, address prevalent traffic congestion challenges, refine infrastructure, and optimize communication frameworks, thereby augmenting their progression towards heightened urbanization. Through the integration of sensors, cell phones, artificial intelligence (AI), data analytics, and cloud computing, smart cities worldwide are evolving to be more efficient, productive, and responsive to their residents' needs. While the promise of smart cities has grown markedly over the past decade, notable challenges, especially in the realm of security, threaten their optimal realization. This research provides a comprehensive survey on IoT in smart cities. It focuses on the IoT-based smart city components and explains how different technologies, such as AI, sensing technologies, and networking technologies, are integrated with IoT for smart cities. It also presents several smart city case studies, investigates the challenges of adopting IoT in smart cities along with prevention methods for each challenge, and offers future directions for upcoming researchers. It serves as a foundational guide for stakeholders and emphasizes the pressing need for a balanced integration of innovation and safety in the smart city landscape.

RevDate: 2025-06-26

S N, S D (2025)

Temporal fusion transformer-based strategy for efficient multi-cloud content replication.

PeerJ. Computer science, 11:e2713.

In cloud computing, ensuring the high availability and reliability of data is paramount for efficient content delivery. Content replication across multiple clouds has emerged as a solution to achieve this. However, managing optimal replication while considering dynamic changes in data popularity and cloud resource availability remains a formidable challenge. To address these challenges, this article proposes a TFT-based Dynamic Data Replication Strategy (TD2RS), leveraging the Temporal Fusion Transformer (TFT), a deep learning temporal forecasting model. The proposed system collects historical data on content popularity and resource availability from multiple cloud sources, which are used as input to the TFT; the TFT captures temporal patterns and forecasts future data demands. Intelligent replication is then performed to optimize content placement across multiple cloud environments based on these forecasts. The framework's performance was validated through extensive experiments using synthetic time-series data simulating varied cloud resource characteristics. The findings include that the proposed TFT approach improves data availability by 20% compared to traditional replication techniques and reduces latency by 15%. These outcomes indicate that the TFT-based replication strategy improves content delivery efficiency in dynamic cloud computing environments, providing an effective solution to the availability, reliability, and performance challenges.
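The following sketch illustrates the forecast-then-replicate loop described above with a naive placeholder forecaster standing in for the Temporal Fusion Transformer; content names, capacities, and thresholds are invented for illustration and are not from TD2RS.

```python
# Forecast-driven replication planning (illustrative placeholder forecaster).
def forecast_demand(history):
    # Placeholder: naive forecast = mean of the last 3 observations.
    recent = history[-3:]
    return sum(recent) / len(recent)

def plan_replication(popularity_history, capacity_free, max_replicas=3):
    """popularity_history: {content_id: [demand per period]}
       capacity_free:      {cloud_id: free capacity units}"""
    plan = {}
    for content, history in popularity_history.items():
        demand = forecast_demand(history)
        # More predicted demand -> more replicas, bounded by max_replicas.
        replicas = min(max_replicas, 1 + int(demand // 100))
        # Place replicas on the clouds with the most free capacity.
        targets = sorted(capacity_free, key=capacity_free.get, reverse=True)[:replicas]
        plan[content] = targets
    return plan

history = {"video42": [80, 150, 260], "doc7": [5, 4, 6]}
clouds = {"aws": 100, "gcp": 80, "azure": 60}
print(plan_replication(history, clouds))   # e.g. {'video42': ['aws', 'gcp'], 'doc7': ['aws']}
```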

RevDate: 2025-06-27

Ravula V, M Ramaiah (2025)

Enhancing phishing detection with dynamic optimization and character-level deep learning in cloud environments.

PeerJ. Computer science, 11:e2640.

As cloud computing becomes increasingly prevalent, the detection and prevention of phishing URL attacks are essential, particularly in the Internet of Vehicles (IoV) environment, to maintain service reliability. In such a scenario, an attacker could send misleading phishing links, potentially compromising the system's functionality or, at worst, leading to a complete shutdown. To address these emerging threats, this study introduces a novel Dynamic Arithmetic Optimization Algorithm with Deep Learning-Driven Phishing URL Classification (DAOA-DLPC) model for cloud-enabled IoV infrastructure. The research utilizes character-level embeddings instead of word embeddings, as the former capture intricate URL patterns more effectively. These embeddings are integrated with a deep learning model combining Multi-Head Attention and Bidirectional Gated Recurrent Units (MHA-BiGRU). To improve precision, hyperparameter tuning is performed using DAOA. The proposed method offers a feasible solution for identifying phishing URLs and achieves computational efficiency through the attention mechanism and dynamic hyperparameter optimization. The need for this work stems from the observation that traditional machine learning approaches are not effective in dynamic environments such as the phishing threat landscape. The presented DLPC approach can learn new forms of phishing attacks in real time and reduce false positives. The experimental results show that the proposed DAOA-DLPC model outperforms the other models with an accuracy of 98.85%, recall of 98.49%, and F1-score of 98.38%, and can effectively detect safe and phishing URLs in dynamic environments. These results imply that the proposed model is more effective than conventional models at distinguishing between safe and unsafe URLs.
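A minimal sketch of the character-level modelling idea, assuming a simple ASCII vocabulary and arbitrary layer sizes, is shown below; it wires an embedding layer into a bidirectional GRU with multi-head self-attention and a binary head, and is not the published DAOA-DLPC implementation.

```python
# Character-level URL classifier sketch: embedding -> BiGRU -> multi-head attention.
import torch
import torch.nn as nn

class CharURLClassifier(nn.Module):
    def __init__(self, vocab=128, emb=32, hidden=64, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)                 # one id per character
        self.bigru = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, char_ids):                              # (batch, url_len)
        h, _ = self.bigru(self.embed(char_ids))               # (batch, len, 2*hidden)
        a, _ = self.attn(h, h, h)                             # self-attention over characters
        return torch.sigmoid(self.head(a.mean(dim=1)))        # mean-pool, then phishing score

url = "http://example.com/login"
ids = torch.tensor([[min(ord(c), 127) for c in url]])
print(CharURLClassifier()(ids))      # probability the URL is phishing
```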

RevDate: 2025-06-26

R A, M G (2025)

Improved salp swarm algorithm based optimization of mobile task offloading.

PeerJ. Computer science, 11:e2818.

BACKGROUND: The realization of computation-intensive applications such as real-time video processing, virtual/augmented reality, and face recognition has become possible for mobile devices with the latest advances in communication technologies. These applications require complex computation for a better user experience and real-time decision-making. However, Internet of Things (IoT) and mobile devices have limited computational power and energy. Executing these computation-intensive tasks on edge devices may result in high energy consumption or high computation latency. In recent times, mobile edge computing (MEC) has been adopted and modernized for offloading such complex tasks. In MEC, IoT devices transmit their tasks to edge servers, which in turn carry out faster computation.

METHODS: However, IoT devices and edge servers can execute only a limited number of concurrent tasks. Furthermore, offloading even a small task (1 KB) to an edge server has an impact on energy consumption. Thus, there is a need to determine an optimum range for task offloading so that energy consumption and response time are minimal. Evolutionary algorithms are well suited to such multiobjective problems; the objectives here are to reduce energy, memory usage, and delay while determining which tasks to offload. Therefore, this study presents an improved salp swarm algorithm-based Mobile Application Offloading Algorithm (ISSA-MAOA) technique for MEC.

RESULTS: This technique harnesses the optimization capabilities of the improved salp swarm algorithm (ISSA) to intelligently allocate computing tasks between mobile devices and the cloud, aiming to concurrently minimize energy consumption and memory usage and reduce task completion delays. Through the proposed ISSA-MAOA, the study contributes to the enhancement of mobile cloud computing (MCC) frameworks, providing a more efficient and sustainable solution for offloading tasks in mobile applications. The results of this research contribute to better resource management, improved user interactions, and enhanced efficiency in MCC environments.
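For orientation, the sketch below implements the standard salp swarm algorithm on a toy scalarized offloading cost; the paper's improved variant (ISSA) and its real multiobjective model are not reproduced, so the cost function, bounds, and parameters are all assumptions.

```python
# Standard salp swarm algorithm (SSA) on a toy offloading cost (illustrative only).
import numpy as np

def cost(x):
    # Hypothetical scalarized cost: x[i] in [0, 1] = fraction of task i offloaded;
    # penalize both device energy (local work) and transmission delay (offloaded work).
    local, offloaded = 1.0 - x, x
    return np.sum(0.6 * local**2 + 0.4 * offloaded**1.5)

def ssa(dim=10, pop=30, iters=200, lb=0.0, ub=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))
    best = X[np.argmin([cost(x) for x in X])].copy()
    for l in range(1, iters + 1):
        c1 = 2 * np.exp(-(4 * l / iters) ** 2)           # exploration/exploitation balance
        for i in range(pop):
            if i == 0:                                    # leader follows the food source
                c2, c3 = rng.uniform(size=dim), rng.uniform(size=dim)
                step = c1 * ((ub - lb) * c2 + lb)
                X[i] = np.where(c3 >= 0.5, best + step, best - step)
            else:                                         # followers track the salp ahead
                X[i] = (X[i] + X[i - 1]) / 2
            X[i] = np.clip(X[i], lb, ub)
            if cost(X[i]) < cost(best):
                best = X[i].copy()
    return best, cost(best)

print(ssa())
```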

RevDate: 2025-06-27

Ibrahim K, Sajid A, Ullah I, et al (2025)

Fuzzy inference rule based task offloading model (FI-RBTOM) for edge computing.

PeerJ. Computer science, 11:e2657.

The key objective of edge computing is to reduce delays and provide consumers with high-quality services. However, there are certain challenges, such as high user mobility and the dynamic environments created by IoT devices. Additionally, the limitations of constrained device resources impede effective task completion. Task offloading is one of the key challenges for edge computing and is addressed in this research. An efficient fuzzy inference rule-based task offloading model (FI-RBTOM) is proposed in this context. The key decision of the proposed model is whether a task should be offloaded to an edge server or a cloud server, or processed on a local node. The four important input parameters are bandwidth, CPU utilization, task length, and task size. The proposed FI-RBTOM is simulated using the MATLAB fuzzy logic tool with 75% of the data used for training and 25% for testing, achieving an overall error rate of 0.39875.
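The toy sketch below shows how fuzzy rules over the four stated inputs could yield a local/edge/cloud decision in plain Python; the memberships, rules, and thresholds are invented for illustration and are not the MATLAB rule base of FI-RBTOM.

```python
# Toy fuzzy-rule offloading decision (illustrative memberships and rules).
def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def decide(bandwidth_mbps, cpu_util, task_len_mi, task_size_kb):
    low_bw   = tri(bandwidth_mbps, 0, 0, 20)
    high_cpu = tri(cpu_util, 50, 100, 150)
    big_task = tri(task_len_mi, 500, 2000, 3500)
    small_sz = tri(task_size_kb, 0, 0, 64)

    # Rule strengths (min acts as fuzzy AND), one per destination.
    scores = {
        "local": min(1 - high_cpu, small_sz),   # CPU not busy and task tiny -> run locally
        "edge":  min(high_cpu, 1 - low_bw),     # CPU busy and link good -> offload to edge
        "cloud": min(high_cpu, big_task),       # CPU busy and task very heavy -> cloud
    }
    return max(scores, key=scores.get), scores

print(decide(bandwidth_mbps=15, cpu_util=85, task_len_mi=2500, task_size_kb=512))
```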

RevDate: 2025-06-26

Sang Y, Guo Y, Wang B, et al (2025)

Diversified caching algorithm with cooperation between edge servers.

PeerJ. Computer science, 11:e2824.

Edge computing compensates for the high latency of the central cloud network by deploying server resources in close proximity to users. The storage and other resources configured on edge servers are limited, and a reasonable cache replacement strategy helps improve the cache hit ratio of edge services, thereby reducing service latency and enhancing service quality. The spatiotemporal correlation of user service request distribution brings both opportunities and challenges to edge service caching. Collaboration between edge servers is often ignored in existing caching-decision research, which can easily lead to a low edge cache hit rate, reducing the efficiency of edge resource use and service quality. Therefore, this article proposes a diversified caching method that ensures the diversity of edge cache services and uses inter-server collaboration to enhance the cache hit rate. When a service request reaches a server and misses the local cache, the proposed algorithm uses the neighbor node's cache information to judge whether the neighbor can provide the service, and the server and the neighbor node then jointly decide how to cache the service. The performance of the proposed diversified caching method is evaluated through a large number of simulation experiments; the results show that the proposed method can improve the cache hit rate by 27.01-37.43% and reduce the average service delay by 25.57-30.68%, and that it maintains good performance as the scale of the edge computing platform changes.
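A simplified sketch of the cooperative idea, assuming LRU eviction and invented names, is shown below: on a local miss the server first asks a neighbor, and it caches a service locally only when no neighbor already holds it, which keeps the combined cache diverse.

```python
# Cooperative edge caching sketch with neighbor lookup and LRU eviction.
from collections import OrderedDict

class EdgeServer:
    def __init__(self, capacity=3):
        self.cache = OrderedDict()       # service_id -> payload, in LRU order
        self.capacity = capacity
        self.neighbors = []

    def handle(self, service_id, fetch_from_cloud):
        if service_id in self.cache:                       # local hit
            self.cache.move_to_end(service_id)
            return self.cache[service_id], "local"
        for n in self.neighbors:                           # neighbor hit
            if service_id in n.cache:
                return n.cache[service_id], "neighbor"
        payload = fetch_from_cloud(service_id)             # miss: go to the cloud
        self.cache[service_id] = payload                   # cache only if no neighbor has it
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)                 # evict least recently used
        return payload, "cloud"

a, b = EdgeServer(), EdgeServer()
a.neighbors, b.neighbors = [b], [a]
cloud = lambda s: f"payload-of-{s}"
print(a.handle("svc1", cloud))   # ('payload-of-svc1', 'cloud')
print(b.handle("svc1", cloud))   # ('payload-of-svc1', 'neighbor') -> b does not duplicate it
```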

RevDate: 2025-06-25

Tran-Van NY, KH Le (2025)

A multimodal skin lesion classification through cross-attention fusion and collaborative edge computing.

Computerized medical imaging and graphics : the official journal of the Computerized Medical Imaging Society, 124:102588 pii:S0895-6111(25)00097-7 [Epub ahead of print].

Skin cancer is a significant global health concern requiring early and accurate diagnosis to improve patient outcomes. While deep learning-based computer-aided diagnosis (CAD) systems have emerged as effective diagnostic support tools, they often face three key limitations: low diagnostic accuracy due to reliance on single-modality data (e.g., dermoscopic images), high network latency in cloud deployments, and privacy risks from transmitting sensitive medical data to centralized servers. To overcome these limitations, we propose a unified solution that integrates a multimodal deep learning model with a collaborative inference scheme for skin lesion classification. Our model enhances diagnostic accuracy by fusing dermoscopic images with patient metadata via a novel cross-attention-based feature fusion mechanism. Meanwhile, the collaborative scheme distributes computational tasks across IoT and edge devices, reducing latency and enhancing data privacy by processing sensitive information locally. Our experiments on multiple benchmark datasets demonstrate the effectiveness of this approach and its generalizability, such as achieving a classification accuracy of 95.73% on the HAM10000 dataset, outperforming competitors. Furthermore, the collaborative inference scheme significantly improves efficiency, achieving latency speedups of up to 20% and 47% over device-only and edge-only schemes.
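The sketch below shows a generic cross-attention fusion block in the spirit of the abstract, with patient metadata providing the query over image patch tokens; the backbone, dimensions, and class count are assumptions rather than the published architecture.

```python
# Generic cross-attention fusion of image tokens and patient metadata (illustrative).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, img_dim=256, meta_in=8, n_classes=7, heads=4):
        super().__init__()
        self.meta_proj = nn.Sequential(nn.Linear(meta_in, img_dim), nn.ReLU())
        self.cross = nn.MultiheadAttention(img_dim, heads, batch_first=True)
        self.head = nn.Linear(img_dim, n_classes)

    def forward(self, img_tokens, metadata):
        # img_tokens: (batch, n_patches, img_dim) from any CNN/ViT backbone
        # metadata:   (batch, meta_in) e.g., age, sex, lesion location (encoded)
        q = self.meta_proj(metadata).unsqueeze(1)          # metadata acts as the query
        fused, _ = self.cross(q, img_tokens, img_tokens)   # attend over image patches
        return self.head(fused.squeeze(1))                 # class logits

model = CrossAttentionFusion()
logits = model(torch.randn(2, 49, 256), torch.randn(2, 8))
print(logits.shape)    # torch.Size([2, 7])
```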
